Abstract
At present, significant progress has been made in image encryption research, but issues remain to be explored in key space, cipher generation, security verification, encryption schemes, and other aspects. To address these, a digital image encryption algorithm is developed in this paper. The algorithm integrates a six-dimensional cellular neural network with generalized chaos to produce pseudo-random numbers, from which plaintext-related ciphers are generated. The initial image matrix is transformed into an L-matrix and a U-matrix through lower-upper decomposition, and these matrices are then encrypted simultaneously with distinct cipher sequences. The algorithm's feasibility and security are demonstrated through comprehensive encryption simulations and performance analysis. The paper's contributions include: i) a new cipher scheme built from the cellular neural network and an innovative chaos approach; ii) an image decomposition encryption that effectively shortens the cipher length and reduces interception risks during transmission; iii) the frequent application of nonlinear transforms, which enhances the structural complexity of the cryptosystem and fortifies the security of the algorithm. Compared with existing algorithms, the paper achieves a novel image decomposition encryption mode with comprehensive advantages. This mode is expected to be applicable to image communication security.
Keywords: Image encryption, security analysis, cellular neural network, LU decomposition, non-linear transforms
Introduction
Digital image encryption serves as a crucial method for protecting image information; it is typically conducted on digital computer systems and involves pixel confusion and diffusion processes.1,2 Supporting techniques in digital image encryption often include chaos, plaintext-related ciphers, confusion (scrambling), and diffusion.3,4 Chaos and plaintext-related ciphers are utilized to generate cipher sequences, while confusion and diffusion are applied for pixel grayscale conversion and coordinate transformation.5,6 Recent advancements in this field have been noteworthy. Feng et al. 7 presented a novel multi-channel image encryption algorithm that uses pixel reorganization and two robust hyper-chaotic maps to encrypt input images. In another paper, Feng et al. 8 constructed a robust hyper-chaotic map and developed an efficient image encryption algorithm based on the map and a pixel fusion strategy. Recently, Raghuvanshi et al. 9 introduced a novel, more stable, secure, and reliable image encryption model. This scheme combines a convolution neural network model with an intertwining logistic map to generate secret keys. Additionally, DNA encoding, diffusion, and bit reversion operations are applied for scrambling and manipulating image pixels. During the same period, Soniya Rohhila and Amit Kumar Singh 10 presented a comprehensive survey of recent digital image encryption using deep learning models, discussing various state-of-the-art deep learning-based encryption techniques. Besides, many recent related works11–15 have significant research value and deserve attention, discussion, and learning. The research topics of these works include chaotic systems, image encryption schemes, cryptographic analysis, and more.
Currently, significant advancements have been made in the field of digital image information protection. However, certain challenges in this domain cannot be overlooked. These include (i) In 2018, Mario, Thomas, Stefan, et al. highlighted the necessity of ensuring both cipher and algorithm security in image encryption. 16 Regrettably, most existing image encryption algorithms focus solely on algorithm security, neglecting cipher security; (ii) When exposed to malicious attacks, schemes with limited key space or simplistic algorithmic structures are vulnerable; (iii) Conventional one-time-pad image encryption imposes stringent requirements on cipher sequence length. For example, traditional pixel diffusion based on add-and-modulus operations demands the cipher matrix match the size of the plain image. Therefore, cryptosystems should be capable of generating a substantial quantity of pseudo-random numbers. Designing a pseudo-random number generator with optimal statistical properties is challenging. A major issue is the difficulty in controlling the periodic variation of the sequence as the length of the generated pseudo-random sequence increases. Effectively and significantly reducing the length of the pseudo-random sequence remains an urgent problem to resolve; (iv) Almost all known schemes encrypt plaintext as a whole. This approach inherently risks one-time key and cipher-text leakage during information communication. If plain images are encrypted in a decentralized manner, the likelihood of comprehensive information leakage could be significantly diminished.
Presented in the paper is a digital image encryption scheme, underpinned by a six-dimensional cellular neural network (6D-CNN) 17 and augmented by lower-upper (LU) matrix decomposition. The 6D-CNN, integrated with a novel chaotic map, forms a cryptosystem with an extensive key space. The process combines matrix LU decomposition, pixel diffusion, and confusion to achieve image encryption. The primary features of the algorithm are summarized as follows:
The integration of CNN and chaos to generate a pseudo-random cipher sequence;
Applying image decomposition techniques to the field of image encryption, effectively shortening the cipher sequence and reducing the risk of information leakage;
The application of multiple nonlinear tools within the cryptosystem and encryption algorithm to increase structural complexity.
The paper is structured as follows: Algorithm preliminaries Section explains the mathematical underpinnings of the encryption scheme. Generating pseudo-random sequence and cryptosystem Section investigates the structure of the cryptosystem and the mechanism behind cipher formation. Algorithms for encryption as well as decryption Section delves into the intricacies of the image encryption algorithm. Encryption and decryption simulations Section showcases the simulation process of encryption and its results. Evaluation of algorithm performance Section evaluates the performance of the algorithm. The final section, Conclusion Section, provides a conclusion to the paper.
Algorithm preliminaries
In this section, the content focuses on the following four parts: i) the concept of the 6D-CNN; ii) the definition of the Extended Henon Map (EHM); iii) the method of matrix LU decomposition and its computer implementation; iv) a new matrix nonlinear transformation and its calculation formula. Parts 1 and 2 are jointly applied to cipher generation. Part 3 is the computational foundation for the subsequent image decomposition. Part 4 serves as an auxiliary tool for cipher generation.
Six-dimensional cellular neural network
The 6D-CNN, a continuous hyper-chaotic system, was introduced by Wang, Bing, and Zhang in 2010, building on the traditional CNN model 18 developed by Chua and Yang. The 6D-CNN is expressed as follows:
(1)
As the number of iterations approaches infinity, two of the system's six Lyapunov exponents are positive, thereby fulfilling the sufficient conditions for hyper-chaos. 18
A novel chaotic map
The Henon map, a well-known chaotic system, is commonly used to generate random sequences.1,2 In recursive form, it is defined as follows:
x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n  (2)
where a and b are the free parameters. To enhance the complexity of Equation (2), it has been extended into a new form:
(3)
where additional parameters are introduced. This modified version, denoted as the EHM in Equation (3), becomes chaotic as the number of iterations approaches infinity, as indicated by its positive Lyapunov exponent. Numerical experiments show that as long as the parameters vary within the specified range, the corresponding exponents remain positive, indicating that Equation (3) satisfies the necessary conditions for a chaotic system. In fact, spectral analysis can rigorously prove that the equation forms a hyper-chaotic system; due to space limitations, this is not elaborated further here.
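The positivity of the Lyapunov exponent can be checked numerically. The sketch below is a minimal illustration, not the authors' code: since the exact EHM form of Equation (3) is not reproduced here, it estimates the largest Lyapunov exponent of the classical Henon map of Equation (2) with the standard parameters a = 1.4, b = 0.3, using tangent-vector renormalization (the Benettin method):

```python
import math

def henon_lyapunov(a=1.4, b=0.3, x=0.1, y=0.1, n=20000, burn=500):
    """Estimate the largest Lyapunov exponent of the Henon map
    x' = 1 - a*x^2 + y, y' = b*x by tangent-vector renormalization."""
    vx, vy = 1.0, 0.0          # tangent vector
    total = 0.0
    for i in range(burn + n):
        # Apply the Jacobian [[-2*a*x, 1], [b, 0]] at the current point
        # to the tangent vector, then advance the orbit itself.
        vx, vy = -2.0 * a * x * vx + vy, b * vx
        x, y = 1.0 - a * x * x + y, b * x
        norm = math.hypot(vx, vy)
        vx, vy = vx / norm, vy / norm
        if i >= burn:          # discard the transient, as the text suggests
            total += math.log(norm)
    return total / n

lam = henon_lyapunov()   # close to the well-known value of about 0.42
```

A positive return value confirms the chaotic behavior claimed above for the classical parameter set.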
LU decomposition of matrix and rearrangement of LU matrices
Matrix decomposition involves converting a given matrix into a product of matrices according to specific rules. Common matrix decomposition methods include LU decomposition, QR decomposition, Schur decomposition, etc. Some of these are suitable for square matrices, while others can be applied to rectangular matrices. 19 This subsection introduces the LU decomposition of rectangular matrices and a rearrangement scheme for LU matrices, laying the groundwork for subsequent encryption of images.
For illustrative purposes, suppose that A = (a_ij) is a matrix of dimensions m × n:
A = (a_{ij})_{m×n}  (4)
If matrix A has a generalized inverse, then it satisfies:
P·A = L·U  (5)
where P is a permutation matrix, L denotes a lower triangular matrix, and U denotes an upper triangular matrix. The expansions of L and U are given by 19 :
L = (l_{ij})_{m×m} with l_{ij} = 0 for j > i,  U = (u_{ij})_{m×n} with u_{ij} = 0 for i > j  (6)
A significant portion of elements in the L-matrix and U-matrix are zeros, indicating their sparse nature. In matrix operations, excluding these zero elements from calculations can save computational resources and enhance information processing efficiency.
Let x represent the total number of elements in the lower triangle of L, and let (p, q) denote the two factors of x whose absolute difference is smallest; the rearrangement result of the L-matrix is then unique. Similarly, let y be the total number of elements in the upper triangle of U, and let (s, t) denote the two factors of y with the smallest difference. Consequently, the nonzero triangular parts of L and U are rearranged by columns into new forms:
(7)
In subsequent sections, the matrices in Equation (7) are referred to as L-image and U-image, respectively.
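The rearrangement described above can be sketched as follows. This is an illustrative reading of the scheme with helper names of my own (`closest_factors`, `pack_lower_triangle`); the paper's exact packing order may differ:

```python
import numpy as np

def closest_factors(x):
    """Return the factor pair (p, q) of x, p <= q, with minimal q - p."""
    p = int(np.sqrt(x))
    while x % p:
        p -= 1
    return p, x // p

def pack_lower_triangle(L):
    """Pack the lower-triangular entries of a square matrix, taken column
    by column, into a near-square 'L-image' as described in the text."""
    n = L.shape[0]
    elems = np.concatenate([L[j:, j] for j in range(n)])  # column-wise
    p, q = closest_factors(elems.size)
    return elems.reshape((p, q), order="F")   # fill column by column

# An 8x8 lower triangle has 36 entries, which packs into a 6x6 L-image.
L = np.tril(np.arange(1, 65, dtype=float).reshape(8, 8))
img = pack_lower_triangle(L)
```

For an n × n matrix, the lower triangle holds n(n + 1)/2 entries, so the resulting L-image is roughly half the size of the original, which is the basis of the cipher-length saving discussed later.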
In practice, LU decomposition is often applied to square matrices. In this case, the size of the matrix A is n × n, and the sizes of the matrices in Equation (5) correspondingly become n × n. In Matlab, the built-in function "lu" quickly implements the LU decomposition of a specified matrix.
For example, calling the function "lu" in Matlab on a given matrix A decomposes it into the product of P, L, and U.
The idea of using LU decomposition of matrices to implement digital image encryption is to encrypt the matrices L and U obtained by decomposing the original pixel matrix; this is a typical indirect encryption method. The detailed encryption and decryption algorithms and steps are discussed in the following text. Because the permutation matrix P is sparse, it is ignored in the encryption algorithm.
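As a stand-in for the Matlab call, a minimal LU decomposition with partial pivoting following the P·A = L·U convention of Matlab's `[L,U,P] = lu(A)` can be sketched in Python (illustrative, not the authors' code):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition with partial pivoting: returns P, L, U
    with P @ A == L @ U, the convention of Matlab's [L,U,P] = lu(A)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        piv = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        if piv != k:
            A[[k, piv]] = A[[piv, k]]
            P[[k, piv]] = P[[piv, k]]
            L[[k, piv], :k] = L[[piv, k], :k]   # keep computed multipliers
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
P, L, U = lu_decompose(A)     # P @ A equals L @ U
```

The exact reconstruction P⁻¹·L·U = A is what the decryption stage relies on.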
Matrix nonlinear transform
Suppose that P = (p_ij) is a matrix of specified dimensions. This subsection defines a nonlinear transform of the matrix (NTM) applied to P, expressed as follows:
(8)
where the dot power operator 20 is applied together with several free parameters, and R is a nonzero random matrix of the same size as P. The inverse of the NTM (NTM−1) is easily deduced:
(9)
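Since the exact form of Equation (8) is not reproduced in this text, the sketch below shows one plausible invertible element-wise "dot power" transform and its inverse. The function names, the parameters a and b, and the positivity assumption on P + R are all illustrative choices, not the paper's:

```python
import numpy as np

def ntm(P, R, a=3.0, b=1.7):
    """Illustrative element-wise nonlinear transform in the spirit of
    Equation (8): a scaled dot power of the randomized matrix."""
    return a * (P + R) ** b          # ".^" dot power, applied element-wise

def ntm_inv(Q, R, a=3.0, b=1.7):
    """Corresponding inverse (Equation (9) analogue), assuming P + R > 0."""
    return (Q / a) ** (1.0 / b) - R

rng = np.random.default_rng(0)
P = rng.uniform(0.0, 1.0, size=(4, 4))
R = rng.uniform(0.1, 1.0, size=(4, 4))   # nonzero random matrix, as in the text
Q = ntm(P, R)
P_back = ntm_inv(Q, R)
```

The round trip recovers P exactly (up to floating-point precision), which is the property the cryptosystem needs from the NTM/NTM−1 pair.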
Generating pseudo-random sequence and cryptosystem
The process for generating a pseudo-random sequence is crucial in a cryptosystem. Despite significant achievements in this area, 21 the development of more secure pseudo-random number generators remains a pressing need in applied cryptography.
This section introduces a novel pseudo-random sequence generation mechanism based on 6D-CNN and EHM. To prevent chaos degradation due to finite precision, the outputs of 6D-CNN are numerically processed for the subsequent iteration. A nonlinear transform and a weighted combination of two pseudo-random sequences generated by 6D-CNN and EHM are used to create new pseudo-random numbers.
As per Equation (1), the 6D-CNN is described by a continuous first-order differential equation. For smooth chaotic sequence generation, the continuous form of 6D-CNN is discretized. Common discretization methods for first-order differential equations include the Runge–Kutta method, the Euler method, and the improved Euler method. The Euler method is employed here for simplicity.
x_{n+1} = x_n + h·F(x_n)  (10)
where h is the step length of each iteration.
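The forward-Euler step of Equation (10) can be sketched generically. Because the 6D-CNN right-hand side of Equation (1) is not reproduced here, the well-known Lorenz system stands in for the right-hand side F; the step function itself is the point of the example:

```python
import numpy as np

def euler_step(f, x, h):
    """One forward-Euler step x_{n+1} = x_n + h * f(x_n), as in Equation (10)."""
    return x + h * f(x)

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Stand-in chaotic right-hand side (illustrative only; the paper's
    Equation (1) defines a six-dimensional CNN system instead)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

h = 0.005                       # step length of each iteration
x = np.array([1.0, 1.0, 1.0])
traj = [x]
for _ in range(2000):
    x = euler_step(lorenz, x, h)
    traj.append(x)
```

For a stiff or fast-varying system, the Runge–Kutta method mentioned above would be the safer choice; Euler is used here, as in the paper, for simplicity.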
Generating pseudo-random sequence
The mechanism to generate the pseudo-random sequence is as follows:
Initialize the parameters and use Equation (10) to generate a chaotic sequence. Apply the following transform to the output of each iteration:
(11)
where the transform takes the decimal part of its argument, and the transformed output serves as the input of the next iteration. Continue until the sequence length is sufficient. This sequence is denoted as X.
Set the parameters of Equation (3) and use it to derive another chaotic sequence of the same size as X, denoted Y. To eliminate the effect of the initial values, the first 1000 iteration values of Equation (10) are discarded; that is, X is assigned from the 1001st iteration onward.
Then, implement the transforms expressed in Equation (8) on the sequences X and Y, respectively, as follows:
(12)
where the transform constants are free, and the associated matrices are nonzero random matrices. As mentioned for Equation (8), the dot power operator performs a power operation on each element of the matrix.
Next, convert the outcomes of Equation (12) to integer data:
(13)
where ⌊·⌋ is the downward rounding (floor) function, and mod is the modulus operator.
Finally, combine the two integer sequences with weights:
(14)
where the weight is a free parameter. The resulting sequence is the final pseudo-random sequence; in the following sections, it is called the Pseudo-Random Sequence based on 6D-CNN and Chaos (PRSCC).
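The quantization and weighted-combination steps of Equations (13) and (14) can be sketched as follows. The scaling constant, the modulus 256, and the weight w are illustrative guesses, and uniform random vectors stand in for the chaotic outputs of Equation (12):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the two transformed chaotic outputs of Equation (12);
# in the paper these come from the 6D-CNN and the EHM.
X = rng.random(1024)
Y = rng.random(1024)

# Equation (13)-style quantization: scale, floor, reduce modulo 256.
Xq = np.floor(X * 1e6).astype(np.int64) % 256
Yq = np.floor(Y * 1e6).astype(np.int64) % 256

# Equation (14)-style weighted combination (the weight w is a guess).
w = 0.6
Z = np.floor(w * Xq + (1.0 - w) * Yq).astype(np.uint8)
```

The output Z is a byte sequence, the form required by the XOR-based diffusion described later.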
The key space of the cryptosystem
From the Generating pseudo-random sequence Section, it is evident that 21 free parameters are involved in PRSCC's generation. Assuming these parameters are 8-bit integers, the cryptosystem's key space reaches 2^168, far exceeding the safety standard of 2^128. Thus, the cryptosystem is safeguarded against brute-force attacks.
National Institute of Standards and Technology randomness testing
SP800-22 R1a is an international standard for statistical randomness testing of binary sequences, issued by the National Institute of Standards and Technology. 22 SP800-22 contains 15 test items, some of which include several sub-items. The test outcome of each item includes two indicators: the p-value and the proportion. In general, the testing is performed at a known significance level, with a confidence interval determined by the number of testing groups. If the p-value is greater than the significance level and the proportion falls in the confidence interval, then the sequence is affirmed to pass the test. Suppose that the significance level is 0.01 and the confidence interval is [0.9833, 0.9967].1,16 For 2000 PRSCCs of fixed bit length, we calculate the p-values and the proportions falling in the confidence interval. The outcomes are displayed in Table 1. All p-values exceed 0.01, with proportions consistently within [0.9833, 0.9967]. Hence, we confirm that the PRSCC is random.
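As an illustration of how a single SP800-22 item works, the first item (the frequency, or monobit, test) reduces to p = erfc(|S_n| / √(2n)), where S_n sums the bits mapped to ±1. A minimal sketch:

```python
import math

def monobit_pvalue(bits):
    """SP800-22 frequency (monobit) test: p = erfc(|S_n| / sqrt(2n)),
    where S_n is the sum of the bits mapped to +1/-1."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)      # map 0 -> -1, 1 -> +1 and sum
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

# A balanced alternating sequence passes; a constant sequence fails.
p_balanced = monobit_pvalue([0, 1] * 500)
p_constant = monobit_pvalue([1] * 1000)
```

A sequence passes this item when p exceeds the 0.01 significance level, matching the criterion column of Table 1.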
Table 1.
SP800-22 R1a testing of PRSCCs.
Number | Index | Criterion | P-values | Proportion | Conclusions |
---|---|---|---|---|---|
1 | Frequency analysis | ≥0.01 | 0.3558 | 0.9894 | Passed |
2 | Block frequency evaluation | ≥0.01 | 0.4706 | 0.9858 | Passed |
3 | Run test | ≥0.01 | 0.7780 | 0.9909 | Passed |
4 | Analysis of maximum consecutive ones in a block | ≥0.01 | 0.1439 | 0.9854 | Passed |
5 | Rank of binary matrices | ≥0.01 | 0.3744 | 0.9928 | Passed |
6 | Application of discrete Fourier transform | ≥0.01 | 0.5814 | 0.9939 | Passed |
7 | Matching with Non-overlapping templates | ≥0.01 | 0.9644 | 0.9861 | Passed |
8 | Matching with overlapping templates | ≥0.01 | 0.6889 | 0.9878 | Passed |
9 | Maurer's universal statistical test | ≥0.01 | 0.2038 | 0.9915 | Passed |
10 | Assessment of linear complexity | ≥0.01 | 0.8089 | 0.9937 | Passed |
11 | Serial test | all ≥ 0.01 | *0.0238 | 0.9858 | Passed |
12 | Estimation of approximate entropy | ≥0.01 | 0.7154 | 0.9962 | Passed |
13 | Calculation of cumulative sums | ≥0.01 | 0.1709 | 0.9887 | Passed |
14 | Assessment of random excursions | all ≥ 0.01 | *0.0300 | 0.9846 | Passed |
15 | Variants of random excursions analysis | all ≥ 0.01 | *0.2095 | 0.9853 | Passed |
Notes: (i) Due to random factors in the calculations, outcomes may vary between rounds; (ii) An asterisk (*) indicates the minimum datum for the respective item.
PRSCC: Pseudo-Random Sequence based on 6D-CNN and Chaos.
Algorithms for encryption as well as decryption
This section consists of the following parts: i) the image encryption scheme and its mathematical description, including the generation of plaintext-related ciphers, pixel diffusion, and pixel scrambling; ii) the image decryption scheme and its mathematical description, which is the inverse course of image encryption; iii) a step-by-step description of the image encryption and decryption algorithms; iv) a detailed diagram of the encryption and decryption scheme; v) an overview of the main features of the proposed algorithms.
Encryption
Diffusion cipher related to plaintext and pixel diffusion
In order to generate a diffusion cipher linked to the plaintext, the transformation defined in Equation (8) is applied between each secondary plain image and its corresponding PRSCC, as follows:
(15)
where the two sequences are PRSCCs whose lengths match the dimensions of the secondary plain images. The outcomes serve as the diffusion ciphers.
Traditional image encryption often employs basic diffusion schemes such as pixel-wise XOR or addition-and-modulus operations. 2 Applied to single pixels, these operations result in low computational efficiency. This paper instead performs the XOR operation row by row, and also column by column:
C_i = K_i ⊕ P_i (i = 1, …, m);  C'_j = K'_j ⊕ P'_j (j = 1, …, n)  (16)
where C_i, K_i, and P_i are the row vectors of the cipher-text, cipher, and plaintext, and C'_j, K'_j, and P'_j are the corresponding column vectors, respectively. For the two secondary plain images, cipher-texts are computed as per Equation (16) to achieve pixel diffusion.
The inverse diffusion process corresponding to Equation (16) is executed as follows:
P_i = K_i ⊕ C_i (i = 1, …, m);  P'_j = K'_j ⊕ C'_j (j = 1, …, n)  (17)
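One plausible realization of the row-by-row and column-by-column XOR of Equation (16) is sketched below; the paper's exact chaining may differ, but the involutive property of XOR that makes the inverse of Equation (17) work is the same:

```python
import numpy as np

def diffuse(img, row_key, col_key):
    """Row-by-row then column-by-column XOR diffusion (one plausible
    reading of Equation (16); the paper's exact chaining may differ)."""
    c = img ^ row_key[:, None]      # XOR row i with the key byte row_key[i]
    c = c ^ col_key[None, :]        # XOR column j with col_key[j]
    return c

def undiffuse(c, row_key, col_key):
    """Inverse diffusion in the spirit of Equation (17): XOR is an
    involution, so repeating the operations in reverse order recovers
    the image."""
    p = c ^ col_key[None, :]
    p = p ^ row_key[:, None]
    return p

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(6, 8), dtype=np.uint8)
rk = rng.integers(0, 256, size=6, dtype=np.uint8)
ck = rng.integers(0, 256, size=8, dtype=np.uint8)
rec = undiffuse(diffuse(img, rk, ck), rk, ck)
```

Note the efficiency point made above: each XOR here is a whole-row or whole-column vector operation, not a per-pixel loop.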
Pixel scrambling
Using Matlab's "randperm" function, two random positive integer sequences over the index ranges of the two diffused images are generated. Based on these sequences, the diffused images are rearranged; the rearranged images are the final encrypted images.
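The scrambling and its inverse can be sketched with NumPy's seeded permutation as a "randperm" analogue (the helper names are mine):

```python
import numpy as np

def scramble(img, seed=0):
    """Flatten, permute with a seeded random permutation (an analogue of
    Matlab's "randperm"), and restore the original shape."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def unscramble(scr, perm):
    """Invert the permutation: element k goes back to position perm[k]."""
    flat = np.empty_like(scr.ravel())
    flat[perm] = scr.ravel()
    return flat.reshape(scr.shape)

img = np.arange(24, dtype=np.uint8).reshape(4, 6)
scr, perm = scramble(img, seed=7)
rec = unscramble(scr, perm)
```

Scrambling only permutes pixel positions, so the multiset of gray values is unchanged; that is why it is paired with diffusion, which changes the values themselves.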
Decryption
Decryption essentially involves reversing the encryption operations on the cipher images. Firstly, the cipher images are rearranged based on the ascending order of the scrambling sequences; this step retrieves the scrambled decrypted images. Next, applying Equation (17) retrieves the diffused decrypted images. Finally, these matrices are converted back into the triangular matrices L and U as per Equation (6), and the plain image is reconstructed using Equation (5).
Algorithm description
i) Image decomposition
Step 1. Convert the original image to a grayscale matrix A.
Step 2. Decompose the matrix A using Matlab into L and U matrices.
Step 3. Transform L and U matrices into secondary plain images and , respectively.
ii) Image encryption
Step 4. Generate the pseudo-random sequence as described in Generating pseudo-random sequence Section.
Step 5. Apply the transformations in Equation (15) to compute the diffusion ciphers.
Step 6. Utilize Equation (16) to compute the diffused images.
Step 7. Rearrange the diffused images using the method in Pixel scrambling Section.
iii) Image decryption and reconstruction
Step 8. Rearrange cipher-texts and to obtain scrambled decrypted images.
Step 9. Use Equation (17) to obtain the diffused decrypted images C1′ and C2′.
Step 10. Convert C1′ and C2′ into upper and lower triangular matrices, respectively.
Step 11. Recompose the plain image as per Equation (5).
Algorithm flow chart
The algorithm flow chart is illustrated in Figure 1.
Figure 1.
The flow chart of the proposed algorithm.
Characteristics of the algorithm
The proposed algorithm exhibits its distinct features primarily in two aspects. First, a composite cryptosystem and cipher generation mechanism have been constructed based on high-dimensional CNNs and multiple degrees of freedom in chaos. This structure effectively ensures the security of pseudo-random sequences and stream ciphers. Second, the encryption process is applied not directly to the original plain image but to two derivative images, namely the L-image and U-image. These images result from the rearrangement of the L and U matrices, which are decomposed from the plain image.
Advantages of these features include:
- Substantial reduction in cipher sequence length: For an image of size m × n, the cipher sequence length required for direct encryption is also m × n. In contrast, the proposed algorithm requires a cipher length equal to the number of nonzero triangular elements of L or of U (refer to LU decomposition of matrix and rearrangement of LU matrices Section). For a square image of size n × n, for instance, the direct-encryption cipher length is n^2, while each triangle holds approximately n^2/2 elements, roughly half of the direct length. Such a reduction in cipher length is notably beneficial in applied cryptography, where the challenge lies in generating a substantial volume of pseudo-random numbers with desirable statistical characteristics for a plaintext-linked diffusion cipher.
- Decreased computational load in decomposition encryption: The size of L-image or U-image is significantly smaller than the initial image, thereby reducing the computational demand. In traditional pixel diffusion schemes, the number of diffusion operations approximates the cipher length. This algorithm effectively halves the computational requirements.
- Reduced risk of interception during transmission: In public key cryptosystems, keys and cipher-texts are transmitted separately across two channels. Suppose that the probability of information being intercepted in one transmission is p; under normal conditions, p is a positive decimal very close to 0. In direct encryption, the probability of the keys and cipher-text being intercepted simultaneously is p^2. In our encryption scheme, however, two groups of keys and two cipher-texts are transmitted over four different channels, so the corresponding probability is p^4. Since p^4 ≪ p^2, this approach significantly lowers the overall risk of information leakage.
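The interception arithmetic above is easy to check numerically; with a per-channel interception probability of, say, p = 0.01 (an illustrative value):

```python
# Direct encryption uses two channels (keys + cipher-text), so total
# compromise requires both to be intercepted: probability p**2.
# The proposed scheme uses four channels, so all four must be
# intercepted: probability p**4.
p = 0.01
risk_direct = p ** 2
risk_proposed = p ** 4
ratio = risk_direct / risk_proposed    # improvement factor of 1/p**2
```

For p = 0.01 the four-channel scheme is ten thousand times less likely to leak everything at once, which is the point of the decentralized transmission.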
Encryption and decryption simulations
In this section, to verify the feasibility and effectiveness of the proposed algorithms, the simulations of encryption and decryption algorithms will be conducted in a specific experimental environment. The experimental images are selected from the international standard database for image processing.
Selected from the Caltech 101 universal image dataset, 23 the experimental images, namely “Face” (300 × 300) and “Lamp” (320 × 480), undergo simulations on the Matlab 2019a platform. The computational environment comprises an Intel Core (TM) i7 CPU (2.4 GHz), 8.0 GB RAM, running Windows 10.
Figures 2 and 3 display the simulation outcomes for the original images “Face” and “Lamp,” respectively. The initial keys are set as follows:
Figure 2.
Encryption as well as decryption outcomes of “face.”
Figure 3.
Encryption as well as decryption outcomes of “lamp.”
In Figures 2 and 3, sub-figure (a) depicts the original plain image. Sub-figures (b) and (c) represent the L-image and U-image, derived from the L-matrix and the U-matrix, respectively, referred to as secondary plain images. Sub-figures (d) and (e) illustrate the encrypted versions of these secondary images. Sub-figure (f) shows the decrypted version of the initial image.
In assessing image quality, the peak signal-to-noise ratio (PSNR)1,2 is a prevalent index for evaluating the similarity between the original and processed images. For two images I and K with identical dimensions of m × n, PSNR is derived from the mean square error (MSE), formulated as follows:
MSE = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [I(i, j) − K(i, j)]^2  (18)
PSNR = 10·log_10((2^k − 1)^2 / MSE)  (19)
where k signifies the number of bits per pixel. Generally, a higher PSNR value indicates greater similarity between the two images.
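Equations (18) and (19) translate directly into code (a straightforward sketch; the `psnr` helper name is mine):

```python
import numpy as np

def psnr(I, K, k=8):
    """PSNR from the mean square error, per Equations (18)-(19);
    k is the bit depth per pixel, so the peak value is 2**k - 1."""
    mse = np.mean((I.astype(float) - K.astype(float)) ** 2)
    if mse == 0:
        return float("inf")     # identical images, as for P2 in Table 2
    return 10.0 * np.log10((2 ** k - 1) ** 2 / mse)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                   # a single-pixel difference
p_same = psnr(a, a)             # infinite for identical images
p_diff = psnr(a, b)
```

The infinite PSNR for identical images explains the +∞ entries reported for the decrypted-versus-original comparisons in Table 2.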
Another metric, the structural similarity (SSIM) index,1,2 assesses the similarity of two statistical variables of the same size. Given variables x and y, SSIM is defined as follows:
SSIM(x, y) = [(2·μ_x·μ_y + c_1)(2·σ_xy + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]  (20)
In the equation, μ_x denotes the mean value of x, σ_x^2 represents the variance of x, σ_xy denotes the covariance between x and y, and c_1 and c_2 denote constants.
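A single-window (global) version of Equation (20) can be sketched as follows; the constants follow the usual convention c_i = (K_i·L)^2 with L = 2^k − 1, which is an assumption here, since the paper does not reproduce its constant choices:

```python
import numpy as np

def ssim_global(x, y, k=8, K1=0.01, K2=0.03):
    """Single-window SSIM over whole images, per Equation (20); the
    stabilizing constants c1, c2 follow the common (K_i * L)^2 convention."""
    L = 2 ** k - 1
    c1, c2 = (K1 * L) ** 2, (K2 * L) ** 2
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

rng = np.random.default_rng(3)
a = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
s_same = ssim_global(a, a)      # exactly 1 for identical images
```

An SSIM of 1 for identical images and a value near 0 for unrelated ones matches the theoretical columns of Table 2.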
This section calculates the PSNR and SSIM for the relevant image pairs, with the data summarized in Table 2. P1 and S1 denote the PSNR and SSIM between the cipher-texts and the secondary plaintexts; P2 and S2 denote the PSNR and SSIM between the decrypted and the original plain images, respectively.
Table 2.
PSNR and SSIM involving the plain image as well as the cipher one.
Items | Face: L-image | Face: U-image | Lamp: L-image | Lamp: U-image | Theoretical value |
---|---|---|---|---|---|
P1 | 5.8352 | 5.6696 | 5.8337 | 5.9673 | – |
S1 | 0.0023 | 0.0029 | 0.0025 | 0.0038 | 0.0000 |
P2 | +∞ | +∞ | +∞ | ||
S2 | 1.0000 | 1.0000 | 1.0000 |
Table 2 indicates the effective encryption as well as decryption capabilities of the introduced algorithm, affirming its feasibility and efficacy.
Evaluation of algorithm performance
A comprehensive evaluation of the proposed algorithm's security and its resistance to various attacks is conducted in this section, which includes: the size of the key space, the analysis of the key sensitivity and equivalent key sensitivity, the gray histogram and surface of the plain images and cipher images, the analysis of the pixel correlation, the computation of the information entropy (IE), the analysis of the plaintext and cipher-text sensitivity, and the computation and interpolation description of the algorithm's average running time.
Key space
In general, the key space encompasses a range of possible key values, where a larger key space bolsters resistance to brute-force attacks. For 8-bit integer images, a key space exceeding 128 bits is considered secure.
The proposed algorithm's key space encompasses 27 keys. Assuming these keys are double-precision decimals, the key space approximates log_2(10^378) ≈ 1256 bits. Even when the keys are confined to the conservative range of [10^−4, 10^4], the key space remains no less than log_2(10^216) ≈ 716 bits, offering ample security against exhaustive attacks.
Key sensitivity as well as equivalent key sensitivity
This subsection delves into both the key sensitivity as well as the equivalent key sensitivity of the introduced algorithm, providing an alternative perspective on its resistance to exhaustive attacks.
Key sensitivity
Key sensitivity, an essential metric, evaluates an algorithm's defense against brute-force attacks. It has two dimensions: sensitivity during encryption and sensitivity during decryption. An algorithm shows high key sensitivity in encryption when two marginally different key sets, applied to the same image, produce significantly distinct cipher images. Conversely, it shows high sensitivity in decryption when slightly varied keys, applied to the same cipher image, yield a decrypted image drastically different from the original plaintext.
Prior to analyzing key sensitivity, several quantitative image comparison indicators are introduced: the correlation coefficient (CORR), the number of pixels change rate (NPCR), the unified average changing intensity (UACI), and the block average changing intensity (BACI).1,2 CORR is calculated by:
CORR(u, v) = cov(u, v) / (√D(u)·√D(v))  (21)
In the equation, the means of u and v are subtracted implicitly, D(u) and D(v) represent the variances of u and v, and cov(u, v) denotes the covariance between u and v.
For two same-sized images, the NPCR indicates the proportion of differing pixels relative to the total image size, while the UACI measures the average absolute difference rate relative to 255 (the greatest possible difference). The NPCR and UACI for images C_1 and C_2 are computed as follows:
NPCR = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} sign(|C_1(i, j) − C_2(i, j)|) × 100%  (22)
UACI = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (|C_1(i, j) − C_2(i, j)| / 255) × 100%  (23)
In the equations, m and n denote the images' height and width, while sign(·) denotes the sign function.
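Equations (22) and (23) can be sketched directly; for two independent random 8-bit images the values land near the ideal 99.61% and 33.46% cited in the tables:

```python
import numpy as np

def npcr(c1, c2):
    """Proportion of differing pixel positions (Equation (22)), in percent."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Mean absolute difference relative to 255 (Equation (23)), in percent."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

rng = np.random.default_rng(4)
c1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
npcr_val = npcr(c1, c2)   # near the ideal 99.6094% for random images
uaci_val = uaci(c1, c2)   # near the ideal 33.4636%
```

Values close to these ideals are precisely what Tables 3 through 6 report for the proposed algorithm's cipher images.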
BACI, offering a more intricate measure than UACI, evaluates the differences between two identically sized images block by block. Detailed computation methods for BACI are available in the literature.1,2
Given the 27 distinct keys in the key space, three keys are selected for sensitivity testing. Assuming the initial key set K0 is as assigned in Encryption and decryption simulations Section, and applying a small increment to each of the three selected keys in turn, four key groups K0, K1, K2, and K3 are chosen.
Key sensitivity in the process of encryption
The four key sets are used to encrypt the L-image and U-image of the “Face” image. The resultant cipher images are revealed in Figure 4. At first glance, discerning differences between the four cipher-texts is challenging. Objective assessment of the cipher images is conducted using indicators such as CORR, SSIM, NPCR, UACI, as well as BACI, with outcomes tabulated in Table 3. The values closely align with theoretical expectations, demonstrating the algorithm's high sensitivity to keys during the encryption process.
(ii) Key sensitivity in the process of decryption
Figure 4.
Cipher-texts generated using four key sets.
Table 3.
Sensitivity analysis of keys in image encryption.
Index | K1-K0: L-image | K1-K0: U-image | K2-K0: L-image | K2-K0: U-image | K3-K0: L-image | K3-K0: U-image | Theoretical values |
---|---|---|---|---|---|---|---|
CORR | −0.0037 | 0.0051 | −0.0047 | 0.0038 | 0.0019 | −0.0063 | 0.0000 |
SSIM | 0.0030 | 0.0111 | 0.0022 | 0.0100 | 0.0062 | −2.2149 × 10−4 | 0.0000 |
NPCR | 99.5814% | 99.5925% | 99.6545% | 99.6190% | 99.5836% | 99.6035% | 99.6094% |
UACI | 33.5332% | 33.3723% | 33.5224% | 33.4531% | 33.3474% | 33.6330% | 33.4636% |
BACI | 26.7332% | 26.7738% | 26.6802% | 26.6830% | 26.7167% | 26.8466% | 26.7712% |
CORR: correlation coefficient; NPCR: number of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.
Use K1, K2, and K3 to decrypt the cipher image generated with K0, and record the decrypted images as P1, P2, and P3, respectively. Then calculate the CORR, SSIM, and NPCR between P1 and P0, P2 and P0, and P3 and P0, respectively, where P0 denotes the image decrypted with K0. The outcomes, compiled in Table 4, demonstrate the algorithm's sensitivity to key variations during the decryption process.
Table 4.
Sensitivity analysis of keys in image decryption.
Index | P1-P0: L-image | P1-P0: U-image | P2-P0: L-image | P2-P0: U-image | P3-P0: L-image | P3-P0: U-image | Theoretical values |
---|---|---|---|---|---|---|---|
CORR | −0.0012 | −0.0017 | −0.0067 | −0.0035 | −0.0017 | −0.0012 | 0.0000 |
SSIM | 0.0055 | 0.0047 | 1.4026 × 10−4 | 0.0031 | 0.0055 | 0.0047 | 0.0000 |
NPCR | 99.5969% | 99.6058% | 99.6323% | 99.6788% | 99.5969% | 99.6085% | 99.6094% |
CORR: correlation coefficient; NPCR: number of pixels change rate.
Equivalent key sensitivity
The analysis extends beyond key sensitivity to equivalent key sensitivity, which is crucial in symmetric cryptography security analysis. The equivalent key, typically derived from chaotic systems as a pseudo-random sequence, is fundamental in deciphering the cipher-text. In the proposed algorithm, the equivalent keys are the sequences generated by Equations (11) and (3), and they are critically analyzed below.
Analysis of equivalent key sensitivity in the process of encryption
In the selection process of the key set K0, it is assumed that Equations (11) and (3) independently generate two pseudo-random sequences whose combined output forms the baseline equivalent key. The proposed encryption algorithm encrypts the plain image with this key to obtain the cipher image C0. To perturb the equivalent key incrementally, an additional set of three pseudo-random sequence combinations is introduced, as indicated by:
Subsequently, the plain image undergoes encryption through the same algorithm using the perturbed combinations, yielding the respective cipher images C1, C2, and C3. Table 5 presents a comparative analysis of the cipher images before and after the equivalent key alteration. The close approximation of all data to theoretical values indicates a high sensitivity of the image encryption algorithm to changes in equivalent keys during the encryption phase.
Sensitivity analysis of equivalent keys in decryption
Table 5.
Sensitivity analysis of equivalent keys in image encryption.
C1-C0 | C2-C0 | C3-C0 | Theoretical values | ||||
---|---|---|---|---|---|---|---|
Image | L-image | U-image | L-image | U-image | L-image | U-image | |
CORR | −0.0020 | 0.0058 | 0.0028 | −0.0011 | −0.0107 | −0.0020 | 0.0000 |
SSIM | 0.0044 | 0.0117 | 0.0086 | 0.0036 | −0.0065 | −0.0027 | 0.0000 |
NPCR | 99.5903% | 99.5880% | 99.6013% | 99.6213% | 99.5880% | 99.6213% | 99.6094% |
UACI | 33.4984% | 33.3256% | 33.3864% | 33.5294% | 33.6529% | 33.5294% | 33.4636% |
BACI | 26.6885% | 26.8568% | 26.7264% | 26.8962% | 26.8067% | 26.8163% | 26.7712% |
CORR: correlation coefficient; NPCR: amount of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.
The decryption of the cipher image is performed using the three perturbed equivalent keys, resulting in the decrypted images P1, P2, and P3, respectively. A comparative evaluation of these images against P0, conducted before and after the variation in equivalent keys, is detailed in Table 6. This data substantiates the sensitivity of the proposed algorithm to equivalent keys during the decryption process.
Table 6.
Sensitivity analysis of equivalent keys in image decryption.
P1-P0 | P2-P0 | P3-P0 | Theoretical values | ||||
---|---|---|---|---|---|---|---|
Image | L-image | U-image | L-image | U-image | L-image | U-image | |
CORR | −0.0046 | −0.0030 | −0.0046 | −0.0028 | 3.3794 × 10−4 | 0.0020 | 0.0000 |
SSIM | 0.0036 | 0.0061 | −8.7072 × 10−5 | −6.6045 × 10−5 | 0.0037 | 0.0047 | 0.0000 |
NPCR | 99.6301% | 99.6390% | 99.6168% | 99.5969% | 99.6201% | 99.5925% | 99.6094% |
CORR: correlation coefficient; NPCR: amount of pixels change rate.
The gray histogram and surface
In the realm of image encryption algorithms, those with heightened security and robustness typically exhibit a uniformly distributed gray scale in cipher-texts. This characteristic manifests in the gray histogram and surface of the cipher-text. Figures 5 and 6 display the gray histograms and surfaces for the secondary plaintexts and their corresponding cipher-texts of the experimental image “Lamp.”
Figure 5.
Gray histogram surfaces of plain as well as cipher images.
Figure 6.
Gray surfaces of plain as well as cipher images.
Analysis of Figures 5 and 6 reveals fluctuating histograms and gray surfaces for the secondary plain image, in contrast to the flat and uniform histograms of the cipher-texts. The cipher images’ gray surfaces exhibit a nearly uniform height. These observations confirm the statistical security and robustness of the proposed encryption algorithm.
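The visual flatness of the cipher histograms can also be checked numerically, for instance with a chi-square statistic of the 256-bin gray histogram against the uniform distribution. The following numpy sketch uses random stand-in images and an illustrative function name; it is not the paper's test procedure.

```python
import numpy as np

def chi_square_uniformity(img):
    # Chi-square statistic of the 256-bin gray histogram against uniformity;
    # a flat cipher histogram yields a value near the chi-square mean (~255
    # for 255 degrees of freedom), while a peaked plaintext histogram is huge
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    expected = img.size / 256.0
    return np.sum((hist - expected) ** 2 / expected)

rng = np.random.default_rng(1)
cipher_like = rng.integers(0, 256, (320, 480), dtype=np.uint8)  # uniform grays
plain_like = rng.integers(60, 130, (320, 480), dtype=np.uint8)  # concentrated grays
print(chi_square_uniformity(plain_like), chi_square_uniformity(cipher_like))
```

A cipher-text that passes such a test corroborates the uniform histograms and near-constant gray surfaces observed in Figures 5 and 6.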
Pixel correlation
Given that the original unmodified image represents an accurate depiction of the subject, a specific level of correlation among its pixels can be anticipated. This correlation is expected to extend to the pixels of the L and U images, as they are derived from the original image. A crucial indicator of the security and robustness of an encryption algorithm is its ability to disrupt this inherent pixel correlation. Successful encryption would result in a pixel correlation within the ciphered images that is distinctly different from that in the L and U images.
In the context of the experimental "Face" image, 2000 pixel pairs were sampled randomly in each of the horizontal, vertical, and diagonal orientations. The CORR for these three sets of pixel pairs was then calculated for both the L and U images and their encrypted variants. The findings are systematically presented in Table 7, and the corresponding pixel distributions are illustrated in Figures 7 and 8.
Table 7.
Correlations between plain as well as cipher images in three directions.
Horizontal | Vertical | Diagonal | Theoretical value | ||
---|---|---|---|---|---|
Secondary plaintext | L-image | 0.1502 | 0.0582 | 0.0651 | — |
U-image | 0.3539 | 0.2611 | 0.2734 | ||
Cipher-text | L-image | 0.0233 | 0.8488 × 10−4 | 0.0130 | 0.0000 |
U-image | 0.0195 | 0.0347 | 0.0057 |
Figure 7.
Pixel distributions of L-image and cipher-text for L-image.
Figure 8.
Pixel distributions of U-image and cipher-text for U-image.
Figures 7 and 8 indicate a concentration of pixels in lower triangular areas for L and U plain images, whereas cipher-text pixels display an even distribution in rectangular regions. This suggests a complete alteration of pixel correlation in the plaintexts by the encryption algorithm, a conclusion corroborated by the data in Table 7.
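The directional correlation test described above can be sketched as follows: sample random pixel pairs that are adjacent in a given direction and compute their Pearson correlation. The sampling scheme and function name are our own illustration, assuming 8-bit grayscale images stored as numpy arrays.

```python
import numpy as np

def adjacent_corr(img, direction="horizontal", n=2000, seed=0):
    # Sample n random pairs of pixels adjacent in the given direction and
    # return the Pearson correlation coefficient of the two samples
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    ys = rng.integers(0, h - dy, n)
    xs = rng.integers(0, w - dx, n)
    a = img[ys, xs].astype(np.float64)
    b = img[ys + dy, xs + dx].astype(np.float64)
    return np.corrcoef(a, b)[0, 1]

# A smooth gradient (plaintext-like) versus a random image (cipher-like)
smooth = ((np.arange(300)[:, None] + np.arange(300)[None, :]) % 256).astype(np.uint8)
rng = np.random.default_rng(2)
cipher_like = rng.integers(0, 256, (300, 300), dtype=np.uint8)
for d in ("horizontal", "vertical", "diagonal"):
    print(d, round(adjacent_corr(smooth, d), 4), round(adjacent_corr(cipher_like, d), 4))
```

A natural image yields correlations well above 0 in all three directions, while an effectively encrypted image yields values near 0, matching the contrast reported in Table 7.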
Information entropy
IE measures the randomness and unpredictability in an information system. For an 8-bit integer grayscale image, the maximal IE value is 8. An image encryption algorithm elevating the IE of plaintext towards this maximum is deemed secure and robust. The IE for an 8-bit image is computed as follows:
$H = -\sum_{i=0}^{255} p(i)\log_2 p(i)$ (24)
where the occurrence frequency of a pixel with a specified value i is denoted as p(i).24
In image processing, relative entropy and information redundancy are key metrics for assessing cipher image quality. For an 8-bit grayscale image, the relative entropy is the quotient IE/8; it is also referred to as the image compression rate and has an upper limit of 1. The deviation of this value from 1 is termed the information redundancy.
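The three metrics just defined can be sketched in a few lines of numpy; the function names and the random stand-in image are illustrative assumptions, not the paper's code.

```python
import numpy as np

def info_entropy(img):
    # Shannon entropy of the 8-bit gray level distribution, Equation (24)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / img.size
    return -np.sum(p * np.log2(p))

def relative_entropy(img):
    # Relative entropy (compression rate): IE divided by its maximum of 8
    return info_entropy(img) / 8.0

def redundancy(img):
    # Information redundancy: deviation of the relative entropy from 1
    return 1.0 - relative_entropy(img)

rng = np.random.default_rng(3)
cipher_like = rng.integers(0, 256, (300, 300), dtype=np.uint8)
print(f"IE={info_entropy(cipher_like):.4f}, "
      f"rel={relative_entropy(cipher_like):.4f}, "
      f"red={redundancy(cipher_like):.4%}")
```

A uniformly distributed 8-bit image yields IE just below 8, relative entropy near 1, and redundancy near 0, the pattern the cipher-text rows of Table 8 exhibit.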
For the experimental images "Face" and "Lamp," the information entropies, relative entropies, and information redundancies were calculated and are shown in Table 8. The related data for other test images of different sizes, including 256 × 256, 512 × 512, and 1024 × 1024, are also listed in Table 8. The outcomes indicate that the information entropies of the cipher-texts surpass those of the plaintexts and closely approach the value of 8. Furthermore, the relative entropies (compression rates) of the plaintexts are notably less than 1, while those of the cipher-texts are almost 1; the plaintexts thus retain significant compression potential, whereas the cipher-texts admit minimal compression. Additionally, the redundancy of each cipher-text is markedly lower than that of the corresponding plaintext and approaches 0, indicating minimal information redundancy in the cipher images. These findings indicate the security and robustness of the proposed algorithm.
Table 8.
Entropies, relative entropy, as well as information redundancy of images of plain and cipher.
Plaintext | Name /Size | IE | Relative entropy | Redundancy | |||
L-image | U-image | L-image | U-image | L-image | U-image | ||
Face(300 × 300) | 4.8440 | 4.8440 | 0.6055 | 0.6055 | 39.45% | 39.45% | |
Lamp(320 × 480) | 4.7528 | 6.7604 | 0.5941 | 0.8451 | 40.59% | 15.50% | |
Bird(256 × 256) | 5.2056 | 5.2801 | 0.7293 | 0.6673 | 58.62% | 41.35% | |
Leopard(512 × 512) | 4.8769 | 6.1125 | 0.6521 | 0.7125 | 33.75% | 31.22% | |
Landscape(1024 × 1024) | 5.3333 | 4.9956 | 0.8032 | 0.7288 | 28.68% | 39.66% | |
Cipher-text | Name/Size | IE | Relative entropy | Redundancy | |||
L-image | U-image | L-image | U-image | L-image | U-image | ||
Face(300 × 300) | 7.9954 | 7.9986 | 0.9994 | 0.9998 | 0.0575% | 0.0175% | |
Lamp(320 × 480) | 7.9959 | 7.9978 | 0.9995 | 0.9997 | 0.0513% | 0.0275% | |
Bird(256 × 256) | 7.9931 | 7.9946 | 0.9994 | 0.9997 | 0.0375% | 0.0032% | |
Leopard(512 × 512) | 7.9968 | 7.9975 | 0.9996 | 0.9998 | 0.0588% | 0.0215% | |
Landscape(1024 × 1024) | 7.9986 | 7.9981 | 0.9999 | 0.9999 | 0.0630% | 0.0344% | |
Theoretical values | 8.0000 | 8.0000 | 1.0000 | 1.0000 | 0.0000 | 0.0000 |
IE: information entropy.
Plaintext sensitivity and cipher-text sensitivity
Plaintext sensitivity
In applied cryptography, plaintext sensitivity refers to the degree of impact that variations in the plaintext have on the cipher-text. This is a crucial measure of an image encryption algorithm's defense against differential attacks such as chosen-plaintext attacks. An encryption algorithm is considered sensitive to the plaintext if a minor change in the plaintext results in a significant alteration of the cipher-text while the keys remain constant.
A small pixel increment is applied to the plaintext. For the "Face" and "Lamp" images, sensitivity indicators were calculated for the ciphers before and after the plaintext variation and recorded in Table 9. The related data for other test images of different sizes, including 256 × 256, 512 × 512, and 1024 × 1024, are also listed in Table 9. The data, closely aligned with theoretical values, confirm the proposed algorithm's plaintext sensitivity.
Table 9.
Plaintext sensitivity analysis of the proposed algorithm.
Name /Size | CORR | SSIM | NPCR | UACI | BACI | |||||
---|---|---|---|---|---|---|---|---|---|---|
L-image | U-image | L-image | U-image | L-image | U-image | L-image | U-image | L-image | U-image | |
Face(300 × 300) | −5.1508 × 10−4 | −0.0040 | 0.0050 | 0.0021 | 99.6833% | 99.6279% | 33.4407% | 33.5629% | 26.7558% | 26.8586% |
Lamp(320 × 480) | −9.4383 × 10−4 | −0.0019 | 0.0048 | 0.0040 | 99.6042% | 99.5883% | 33.4522% | 33.5095% | 26.7864% | 26.8023% |
Bird(256 × 256) | −4.8876 × 10−4 | −0.0033 | 0.0031 | 0.0017 | 99.6739% | 99.6231% | 33.4452% | 33.5130% | 26.7428% | 26.8190% |
Leopard (512 × 512) | −3.2145 × 10−4 | −0.0027 | 0.0052 | 0.0049 | 99.6457% | 99.6288% | 33.4538% | 33.5061% | 26.7735% | 26.7821% |
Landscape (1024 × 1024) | −6.1964 × 10−4 | −0.0037 | 0.0063 | 0.0068 | 99.6313% | 99.6137% | 33.4689% | 33.4702% | 26.7653% | 26.7746% |
Theoretical values | 0.0000 | 0.0000 | 99.6094% | 33.4636% | 26.7712% |
CORR: correlation coefficient; NPCR: amount of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.
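The differential indicators in Table 9 can be sketched as follows. NPCR and UACI follow their standard definitions; for BACI we assume a common formulation (overlapping 2 × 2 blocks of the absolute difference image, averaging the six pairwise absolute differences per block), which may differ in detail from the paper's exact definition.

```python
import numpy as np

def npcr(c1, c2):
    # Number of Pixels Change Rate
    return np.mean(c1 != c2) * 100.0

def uaci(c1, c2):
    # Unified Average Changing Intensity
    d = np.abs(c1.astype(np.int64) - c2.astype(np.int64))
    return np.mean(d) / 255.0 * 100.0

def baci(c1, c2):
    # Block Average Changing Intensity (assumed formulation): for each
    # overlapping 2x2 block of the absolute difference image, average the
    # absolute differences of its six element pairs, then average over blocks
    d = np.abs(c1.astype(np.int64) - c2.astype(np.int64))
    p, q = d[:-1, :-1], d[:-1, 1:]
    r, s = d[1:, :-1], d[1:, 1:]
    m = (np.abs(p - q) + np.abs(p - r) + np.abs(p - s)
         + np.abs(q - r) + np.abs(q - s) + np.abs(r - s)) / 6.0
    return np.mean(m) / 255.0 * 100.0

# Two independent random 8-bit images stand in for the pre/post-change ciphers
rng = np.random.default_rng(4)
c1 = rng.integers(0, 256, (320, 480), dtype=np.uint8)
c2 = rng.integers(0, 256, (320, 480), dtype=np.uint8)
print(f"NPCR={npcr(c1, c2):.4f}%, UACI={uaci(c1, c2):.4f}%, BACI={baci(c1, c2):.4f}%")
```

For independent uniform ciphers, these statistics concentrate near the theoretical values 99.6094%, 33.4636%, and 26.7712% quoted in the tables.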
Cipher-text sensitivity
Cipher-text sensitivity, akin to plaintext sensitivity, assesses the decryption algorithm's capacity to withstand differential attacks like chosen-cipher-text attacks. It gauges the extent to which variations in the cipher-text affect the plaintext. Employing a specific set of keys for encryption and decryption, when a slight alteration in the cipher image leads to a significant change in the plaintext upon decryption, the decryption algorithm is considered sensitive to the cipher image.
A slight variation was applied to the cipher image. Sensitivity data for the "Face" and "Lamp" images, before and after the cipher image alteration, are tabulated in Table 10. The related data for other test images of different sizes, including 256 × 256, 512 × 512, and 1024 × 1024, are also listed in Table 10.
Table 10.
Cipher-text sensitivity analysis of the proposed algorithm.
Image name | Image size | CORR | SSIM | NPCR | |||
---|---|---|---|---|---|---|---|
L-image | U-image | L-image | U-image | L-image | U-image | ||
Face | 300 × 300 | −0.0040 | −0.0012 | 0.0024 | 0.0040 | 99.7912% | 99.8076% |
Lamp | 320 × 480 | −0.0020 | 0.0061 | 0.0041 | 0.0122 | 99.7839% | 99.7927% |
Bird | 256 × 256 | 0.0018 | 0.0021 | 0.0015 | 0.0026 | 99.7301% | 99.7569% |
Leopard | 512 × 512 | 0.0035 | 0.0043 | 0.0048 | 0.0076 | 99.6920% | 99.7233% |
Landscape | 1024 × 1024 | 0.0058 | 0.0065 | 0.0075 | 0.0081 | 99.6386% | 99.6549% |
Theoretical values | 0.0000 | 0.0000 | 99.6094% |
CORR: correlation coefficient; NPCR: amount of pixels change rate.
This data substantiates the sensitivity of the proposed decryption algorithm to cipher image changes.
The average running time of the algorithm
This subsection gives the running time evaluation of the proposed algorithm. The computational environment comprises an Intel Core (TM) i7 CPU (2.4 GHz), 8.0 GB RAM, running Windows 10. For images of different sizes, we tested and calculated the average running time of the algorithm. The average times of the algorithm running 100 times are listed in Table 11. Besides, Figure 9 demonstrates the time interpolation curves using the method of piecewise cubic Hermite interpolation polynomial. It is obvious that the algorithm time consumption is quite limited and the proposed algorithm is easy to implement in existing computing environments. It should be noted that, since some random factors are involved in the algorithm and the experimental result is closely related to the computation configuration, the given data is relative and only for reference.
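A timing harness in the spirit of this evaluation can be sketched as below; the placeholder `dummy_encrypt` merely stands in for the proposed encryption routine, and `time.perf_counter` measures elapsed wall-clock time.

```python
import time
import numpy as np

def average_time(func, img, runs=100):
    # Average wall-clock time of one call of func(img) over the given runs
    t0 = time.perf_counter()
    for _ in range(runs):
        func(img)
    return (time.perf_counter() - t0) / runs

def dummy_encrypt(img):
    # Placeholder standing in for the proposed encryption algorithm
    return img ^ 0x5A

img = np.zeros((512, 512), dtype=np.uint8)
print(f"{average_time(dummy_encrypt, img):.6f} s per run")
```

As the text notes, absolute timings depend heavily on the machine configuration, so figures obtained this way are only relative indicators.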
Table 11.
The data of the average times of the algorithm.
Name | Bird | Face | Lamp | Leopard | Landscape |
---|---|---|---|---|---|
Size | 256 × 256 | 300 × 300 | 320 × 480 | 512 × 512 | 1024 × 1024 |
Average encryption time (s) | 0.0838 | 0.0623 | 0.0764 | 0.2633 | 0.5641 |
Average decryption time (s) | 0.0786 | 0.0579 | 0.0688 | 0.2440 | 0.4601 |
Figure 9.
Time interpolation curves of PCHIP. PCHIP: piecewise cubic Hermite interpolation polynomial.
Comparison with other algorithms
This subsection compares the proposed algorithm with those cited in References 7, 9, 25, and 26, based on experimental data from the "Lamp" image. It is noteworthy that the comparison involves only a subset of performance indicators. The experimental results are presented in Table 12.
Table 12.
Selected performance information for various algorithms.
Reference 7 | Reference 9 | Reference 25 | Reference 26 | Our algorithm | Theoretical value | ||
---|---|---|---|---|---|---|---|
Key space | 2^412 | 2^10240 | 2^128 | 2^199 | 2^1256 | ≥2^128 | |
IE | 7.9997 | 7.9996 | 7.9972 | 7.9976 | 7.9982 | 8.0000 | |
Plain sensitivity | NPCR | 99.6118% | 99.7312% | 99.6233% | 99.6218% | 99.5963% | 99.6094% |
UACI | 33.5036% | 33.4226% | 33.6544% | 33.5527% | 33.4809% | 33.4636% | |
Note | The data represents the mean value of two original data sets. |
NPCR: amount of pixels change rate; UACI: unified average changing intensity; IE: information entropy.
For all five algorithms under comparison, as shown in Table 12, all indicators meet the security requirements. Theoretically, this suggests the feasibility and effectiveness of these algorithms. To comprehensively evaluate their quality, identical performance indicators were rated on a scale. The scoring criteria were as follows: 5 points for the best performance, descending to 1 point for the least effective. The scoring outcomes are detailed in Table 13. These scores lead to the conclusion that the proposed algorithm holds local comparative advantages and good overall performance.
Table 13.
Performance data score of different algorithms.
Reference 7 | Reference 9 | Reference 25 | Reference 26 | Our algorithm | ||
---|---|---|---|---|---|---|
Key space | Score | 3 | 5 | 1 | 2 | 4 |
IE | Score | 5 | 4 | 1 | 2 | 3 |
NPCR | Score | 5 | 1 | 2 | 3 | 4 |
UACI | Score | 4 | 3 | 1 | 2 | 5 |
Sum | 17 | 13 | 5 | 9 | 16 |
NPCR: amount of pixels change rate; UACI: unified average changing intensity; IE: information entropy.
Conclusions
Aiming at some unresolved issues in image encryption, such as key space, password generation, security verification, and encryption schemes, this paper constructed an encryption algorithm for digital images using a 6D-CNN and matrix LU decomposition. The paper comprehensively discussed the mathematical foundation of the algorithm, the password generation method, the image decomposition encryption scheme, and the security of the proposed cryptosystem, and conducted simulation experiments. Compared with similar existing encryption schemes, the proposed algorithm showed comprehensive advantages, including a significant reduction in cipher length, a decreased likelihood of key and cipher-text interception during transmission, and a substantially large key space. In contrast to traditional image compression encryption, this paper proposed a novel image decomposition encryption mode, which opens up new ideas for enhancing image encryption and transmission security. The computational complexity of the algorithm will be discussed further in future work.
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant No. 61702153.
Footnotes
ORCID iD: Xikun Liang https://orcid.org/0000-0001-7280-3062
Authors’ contributions: LT and XL contributed in conception, design, manuscript drafting, and manuscript review. LH contributed in funding acquisition, software, and visualization. BH contributed in validation and resources.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China (grant number 61702153).
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
- 1.Zhang Y. Chaotic Digital Image Cryptosystem. Beijing: Tsinghua University Press, 2016.
- 2.Guo FM, Tu L. Application of Chaos Theory in Cryptography. Beijing: Beijing Institute of Technology Press, 2015.
- 3.Yang Y. Development and future of information hiding in image transformation domain: a literature review. In: 2022 4th International Conference on Image Processing and Machine Vision. ACM, 2022.
- 4.Feng L, Du J, Fu C. Digital image encryption algorithm based on double chaotic map and LSTM. Comput Mater Continua 2023; 77: 1645–1662.
- 5.Ma X, Wang Z, Wang C. An image encryption algorithm based on Tabu search and hyperchaos. Int J Bifurcation Chaos 2024; 34: 2450170.
- 6.Lin Y, Yang Y, Li P. Development and future of compression-combined digital image encryption: a literature review. Digit Signal Process 2025; 158: 104908.
- 7.Feng W, Yang J, Zhao X, et al. A novel multi-channel image encryption algorithm leveraging pixel reorganization and hyperchaotic maps. Mathematics 2024; 12: 3917.
- 8.Feng W, Zhang J, Chen Y, et al. Exploiting robust quadratic polynomial hyperchaotic map and pixel fusion strategy for efficient image encryption. Expert Syst Appl 2024; 246: 123190.
- 9.Raghuvanshi KK, Kumar S, Kumar S, et al. Image encryption algorithm based on DNA encoding and CNN. Expert Syst Appl 2024; 252: 124287.
- 10.Rohhila S, Singh AK. Deep learning-based encryption for secure transmission of digital images: a survey. Comput Electr Eng 2024; 116: 109236.
- 11.Ye C, Tan S, Wang J, et al. Social image security with encryption and watermarking in hybrid domains. Entropy 2025; 27: 276.
- 12.Yu F, He S, Yao W, et al. Bursting firings in memristive Hopfield neural network with image encryption and hardware implementation. IEEE Trans Comput Aided Des Integr Circuits Syst 2025.
- 13.Yu F, Su D, He S, et al. Resonant tunneling diode cellular neural network with memristor coupling and its application in police forensic digital image protection. Chin Phys B 2025; 34: 050502.
- 14.Yu F, Zhang S, Su D, et al. Dynamic analysis and implementation of FPGA for a new 4D fractional-order memristive Hopfield neural network. Fractal Fract 2025; 9: 115.
- 15.Yu F, Tan B, He T, et al. A wide-range adjustable conservative memristive hyperchaotic system with transient quasi-periodic characteristics and encryption application. Mathematics 2025; 13: 726.
- 16.Preishuber M, Hütter T, Katzenbeisser S. Depreciating motivation and empirical security analysis of chaos-based image and video encryption. IEEE Trans Inf Forensics Secur 2018; 13: 2137–2150.
- 17.Wang X, Xu B, Zhang H. A multi-ary number communication system based on hyperchaotic system of 6th-order cellular neural network. Commun Nonlinear Sci Numer Simul 2010; 15: 124–133.
- 18.Chua LO, Yang L. Cellular neural networks: theory. IEEE Trans Circuits Syst 1988; 35: 1257–1272.
- 19.Miller JE, Moursund DG, Duris CS. Elementary Theory and Application of Numerical Analysis. New York: Dover Publications, 2011.
- 20.Zhao B, Chen M, Zou FS, et al. Proficiency in MATLAB: Science Computation and the Application of Data Statistics. Beijing: Posts and Telecommunications Press, 2018.
- 21.Dong L, Yao G. Method for generating pseudo random numbers based on cellular neural network. Chinese J Commun 2016; 37: 2016252(1–7).
- 22.Rukhin A, Nechvatal J, Smid M, et al. A statistical test suite for random and pseudorandom number generators for cryptographic applications. Special Publication 800-22 Revision 1a. Maryland: National Institute of Standards and Technology (NIST), 2010.
- 23.101_ObjectCategories. The Caltech 101 dataset. http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html#Download, 2020.
- 24.Wu Y, Zhou Y, Saveriades G, et al. Local Shannon entropy measure with statistical tests for image randomness. Inf Sci 2013; 222: 323–342.
- 25.Wang S, Wang C, Xu C. An image encryption algorithm based on a hidden attractor chaos system and the Knuth–Durstenfeld algorithm. Opt Lasers Eng 2019; 128: 105995.
- 26.Zhou K, Fan J, Fan H, et al. Secure image encryption scheme using double random-phase encoding and compressed sensing. Opt Laser Technol 2020; 121: 105769.