Science Progress. 2025 Sep 9;108(3):00368504251375171. doi: 10.1177/00368504251375171

Digital image encryption utilizing high-dimensional cellular neural networks and lower-upper triangular decomposition of matrix

Limin Tao 1, Xikun Liang 1, Lidong Han 1, Bin Hu 1
PMCID: PMC12421057  PMID: 40924586

Abstract

Although significant progress has been made in image encryption research, open issues remain in key space design, cipher generation, security verification, encryption schemes, and other aspects. To address these issues, this paper develops a digital image encryption algorithm. The algorithm integrates a six-dimensional cellular neural network with a generalized chaotic map to generate pseudo-random numbers, from which plaintext-related ciphers are derived. The initial image matrix is decomposed into an L-matrix and a U-matrix through lower-upper (LU) decomposition, and these matrices are then encrypted simultaneously with distinct cipher sequences. The algorithm's feasibility and security are demonstrated through comprehensive encryption simulations and performance analysis. The paper's contributions include: i) combining the cellular neural network with an innovative chaotic map to develop a new cipher scheme; ii) an image decomposition encryption that effectively shortens the cipher length and reduces interception risks during transmission; iii) the repeated application of nonlinear transforms, which enhances the structural complexity of the cryptosystem and fortifies the security of the algorithm. Compared with existing algorithms, the proposed scheme realizes a novel image decomposition encryption mode with comprehensive advantages, which is expected to be applied to image communication security.

Keywords: Image encryption, security analysis, cellular neural network, LU decomposition, non-linear transforms

Introduction

Digital image encryption serves as a crucial method for protecting image information; it is typically conducted on digital computer systems and involves pixel confusion and diffusion processes.1,2 Supporting techniques in digital image encryption often include chaos, plaintext-related ciphers, confusion (scrambling), and diffusion.3,4 Chaos and plaintext-related ciphers are utilized to generate cipher sequences, while confusion and diffusion are applied for pixel grayscale conversion and coordinate transformation.5,6 Recent advancements in this field have been noteworthy. Feng et al. 7 presented a novel multi-channel image encryption algorithm that uses pixel reorganization and two robust hyper-chaotic maps to encrypt input images. In another paper, Feng et al. 8 constructed a robust hyper-chaotic map and developed an efficient image encryption algorithm based on the map and a pixel fusion strategy. Recently, Raghuvanshi et al. 9 introduced a more stable, secure, and reliable image encryption model; this scheme combines a convolutional neural network model with an intertwining logistic map to generate secret keys, and additionally applies DNA encoding, diffusion, and bit reversion operations to scramble and manipulate image pixels. During the same period, Rohhila and Singh 10 presented a comprehensive survey of recent digital image encryption methods based on deep learning models and discussed various state-of-the-art deep learning-based encryption techniques. Besides, many recent related works11–15 have significant research value and deserve attention, discussion, and learning. The research topics of these works include chaotic systems, image encryption schemes, cryptographic analysis, and more.

Currently, significant advancements have been made in the field of digital image information protection. However, certain challenges in this domain cannot be overlooked: (i) In 2018, Preishuber et al. 16 highlighted the necessity of ensuring both cipher and algorithm security in image encryption. Regrettably, most existing image encryption algorithms focus solely on algorithm security and neglect cipher security. (ii) When exposed to malicious attacks, schemes with a limited key space or a simplistic algorithmic structure are vulnerable. (iii) Conventional one-time-pad image encryption imposes stringent requirements on cipher sequence length. For example, traditional pixel diffusion based on add-and-modulus operations demands that the cipher matrix match the size of the plain image, so cryptosystems should be capable of generating a substantial quantity of pseudo-random numbers. Designing a pseudo-random number generator with optimal statistical properties is challenging; in particular, it is difficult to control the periodic behavior of the sequence as its length increases. Effectively and significantly reducing the length of the pseudo-random sequence therefore remains an urgent problem. (iv) Almost all known schemes encrypt the plaintext as a whole, which inherently risks one-time key and cipher-text leakage during information communication. If plain images were encrypted in a decentralized manner, the likelihood of comprehensive information leakage could be significantly diminished.

Presented in the paper is a digital image encryption scheme, underpinned by a six-dimensional cellular neural network (6D-CNN) 17 and augmented by lower-upper (LU) matrix decomposition. The 6D-CNN, integrated with a novel chaotic map, forms a cryptosystem with an extensive key space. The process combines matrix LU decomposition, pixel diffusion, and confusion to achieve image encryption. The primary features of the algorithm are summarized as follows:

  1. The integration of CNN and chaos to generate a pseudo-random cipher sequence;

  2. Applying image decomposition techniques to the field of image encryption, effectively shortening the cipher sequence and reducing the risk of information leakage;

  3. The application of multiple nonlinear tools within the cryptosystem and encryption algorithm to increase structural complexity.

The paper is structured as follows: Algorithm preliminaries Section explains the mathematical underpinnings of the encryption scheme. Generating pseudo-random sequence and cryptosystem Section investigates the structure of the cryptosystem and the mechanism behind cipher formation. Algorithms for encryption as well as decryption Section delves into the intricacies of the image encryption algorithm. Encryption and decryption simulations Section showcases the simulation process of encryption and its results. Evaluation of algorithm performance Section evaluates the performance of the algorithm. The final section, Conclusion Section, provides a conclusion to the paper.

Algorithm preliminaries

In this section, the content focuses on the following four parts: i) the concept of the 6D-CNN; ii) the definition of the Extended Henon Map (EHM); iii) the method of LU decomposition of a matrix and its computer implementation; iv) a new matrix nonlinear transform and its calculation formula. Parts 1 and 2 are jointly applied to cipher generation. Part 3 is the computational foundation for the subsequent image decomposition. Part 4 serves as an auxiliary tool for cipher generation.

Six-dimensional cellular neural network

The 6D-CNN, a continuous hyper-chaotic system, was introduced by Wang, Xu, and Zhang 17 in 2010, building on the traditional CNN model 18 developed by Chua and Yang. The 6D-CNN is expressed as follows:

dx_1/dt = −x_3 − x_4
dx_2/dt = 2x_2 + x_3
dx_3/dt = 14x_1 − x_2
dx_4/dt = 100x_1 − 100x_4 + 100(|x_4 + 1| − |x_4 − 1|)
dx_5/dt = x_1 + 18x_2 − x_5
dx_6/dt = 100x_2 + 4x_5 − 4x_6    (1)

As the evolution time approaches infinity, the system's six Lyapunov exponents include two positive ones, λ_1 = 2.7481 and λ_2 = 1.2411, thereby fulfilling the sufficient conditions for hyper-chaos. 18

A novel chaotic map

The Henon map, commonly used to generate random sequences, is a well-known example of chaos.1,2 The Henon map, in its recursive form, is defined as follows:

x_{k+1} = 1 − αx_k^2 + y_k
y_{k+1} = βx_k    (2)

where 0.54 < α < 2 and 0 < |β| < 1 are the free parameters. To enhance the complexity of Equation (2), it has been extended into a new form:

x_{k+1} = 1 − αx_k^2 + βx_k y_k + γx_k + λy_k
y_{k+1} = μx_k    (3)

where 1 ≤ α < 2 and 0 < |β|, |γ|, |λ|, |μ| < 1 are additional parameters. This modified version, denoted as the EHM in Equation (3), becomes chaotic as the number of iterations approaches infinity, as indicated by its positive Lyapunov exponent. For example, when α = 1.5, β = γ = λ = μ = 0.5, and x_0 = 1.1, it is not hard to obtain the Lyapunov exponent θ = 3.43 > 0. Numerical experiments show that as long as the above parameters vary within the specified range, the corresponding exponents remain positive, which indicates that Equation (3) satisfies the necessary conditions for a chaotic system. In fact, through spectral analysis, it can be rigorously proven that this equation forms a hyper-chaotic system; due to space limitations, this is not elaborated further here.
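
As a concrete illustration of Equation (3), the following Matlab sketch (ours, not the authors' code) iterates the EHM with the example parameters quoted above; the initial value y_0 = 0 and the iteration count are assumptions.

    % Minimal sketch: iterate the EHM of Equation (3) with the example parameters
    % alpha = 1.5, beta = gamma = lambda = mu = 0.5, x0 = 1.1 (y0 = 0 is assumed).
    al = 1.5; be = 0.5; ga = 0.5; la = 0.5; mu = 0.5;   % alpha, beta, gamma, lambda, mu
    x = 1.1;  y = 0;                 % initial values
    N = 5000;                        % number of iterations (arbitrary)
    seq = zeros(N, 2);               % store the trajectory
    for k = 1:N
        xNext = 1 - al*x^2 + be*x*y + ga*x + la*y;
        yNext = mu*x;
        x = xNext;  y = yNext;
        seq(k, :) = [x, y];
    end
    plot(seq(:, 1), seq(:, 2), '.')  % scatter plot of the resulting orbit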

LU decomposition of matrix and rearrangement of LU matrices

Matrix decomposition involves converting a given matrix into a product of matrices according to specific rules. Common matrix decomposition methods include LU decomposition, QR decomposition, Schur decomposition, etc. Some of these are suitable for square matrices, while others can be applied to rectangular matrices. 19 This subsection introduces the LU decomposition of rectangular matrices and a rearrangement scheme for LU matrices, laying the groundwork for subsequent encryption of images.

For illustrative purposes, suppose that A = (a_ij)_{m×n} is a matrix of specified dimensions, comprising m × n elements (m ≥ n):

A = [ a_11  a_12  …  a_1n
      a_21  a_22  …  a_2n
      ⋮     ⋮          ⋮
      a_m1  a_m2  …  a_mn ]_{m×n}    (4)

If matrix A has a generalized inverse, then it satisfies:

PA = LU,   A = P^{−1}LU    (5)

where P is a permutation matrix, L denotes a lower triangular matrix of size m × n, and U denotes an upper triangular matrix of size n × n. The expansions of L and U are given by 19 :

L = [ l_11  0     …  0
      l_21  l_22  …  0
      ⋮     ⋮          ⋮
      l_n1  l_n2  …  l_nn
      ⋮     ⋮          ⋮
      l_m1  l_m2  …  l_mn ]_{m×n}

U = [ u_11  u_12  …  u_1n
      0     u_22  …  u_2n
      ⋮     ⋮          ⋮
      0     0     …  u_nn ]_{n×n}    (6)

A significant portion of elements in the L-matrix and U-matrix are zeros, indicating their sparse nature. In matrix operations, excluding these zero elements from calculations can save computational resources and enhance information processing efficiency.

Let x = n(2m − n + 1)/2 represent the total number of elements in the lower triangle of L, and write x = pq, where p and q denote the two factors of x whose difference has the smallest absolute value; with this choice, the rearrangement of the L-matrix is unique. Similarly, let y = n(n + 1)/2 = rs denote the total number of elements in the upper triangle of U, where r and s are the two factors of y whose difference has the smallest absolute value. Consequently, the nonzero triangular parts of L and U are rearranged column by column into the new forms:

LRe = [ b_11  b_12  …  b_1q
        b_21  b_22  …  b_2q
        ⋮     ⋮          ⋮
        b_p1  b_p2  …  b_pq ]_{p×q}

URe = [ c_11  c_12  …  c_1s
        c_21  c_22  …  c_2s
        ⋮     ⋮          ⋮
        c_r1  c_r2  …  c_rs ]_{r×s}    (7)

In subsequent sections, the matrices in Equation (7) are referred to as L-image and U-image, respectively.

In practice, LU decomposition is often applied to square matrices, in which case the matrix A is of size n × n and the matrices P, L, U in Equation (5) correspondingly become n × n. In Matlab, the built-in function “lu” can quickly implement the LU decomposition of a specified matrix.

For example, if

A = [ 108   33   65
       40  168   63
       86   37  212 ],

calling the function “lu” in Matlab decomposes A into A = P^{−1}LU, where

L = [ 1.0000  0.0000  0.0000
      0.3704  1.0000  0.0000
      0.7963  0.0688  1.0000 ],

U = [ 108  33        65
      0    155.7778  38.9259
      0    0         157.5615 ],

P = [ 1  0  0
      0  1  0
      0  0  1 ].

The idea of using LU decomposition of matrices to implement digital image encryption is to encrypt the matrices L and U obtained by decomposing the original pixel matrix. This is a typical indirect encryption method; a Matlab sketch of the decomposition-and-rearrangement step is given below. The detailed encryption and decryption algorithms and steps will be discussed in the following text. Due to the sparse nature of matrix P, it is ignored in the encryption algorithm.
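
The following Matlab sketch (our illustration under stated assumptions, not the authors' code) reproduces the decomposition and rearrangement of Equations (5) and (7) for a random matrix standing in for a grayscale image; the helper closestFactors and the stand-in matrix are hypothetical.

    % Minimal sketch: LU decomposition (Equation (5)) and column-wise rearrangement
    % of the triangular parts into the L-image and U-image of Equation (7).
    A = randi([0 255], 300, 280);            % stands in for an m-by-n image, m >= n
    [m, n] = size(A);
    [L, U, P] = lu(A);                       % P*A = L*U; L is m-by-n, U is n-by-n

    lVec = L(tril(true(m, n)));              % x = n(2m-n+1)/2 lower-triangle elements
    uVec = U(triu(true(n, n)));              % y = n(n+1)/2 upper-triangle elements

    pq = closestFactors(numel(lVec));        % p*q = x with |p-q| minimal
    rs = closestFactors(numel(uVec));        % r*s = y with |r-s| minimal
    LRe = reshape(lVec, pq(1), pq(2));       % L-image (column-wise filling)
    URe = reshape(uVec, rs(1), rs(2));       % U-image

    function f = closestFactors(x)
    % Return [p, q] with p*q = x and the smallest possible |p - q|.
    p = floor(sqrt(x));
    while mod(x, p) ~= 0
        p = p - 1;
    end
    f = [p, x/p];
    end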

Matrix nonlinear transform

Suppose that P = (p_ij)_{h×k} is a matrix of specified dimensions, comprising h × k elements. This subsection defines a nonlinear transform of the matrix (NTM), applied to P as follows:

P~ = f(P) = a·P.^b + c·R    (8)

where .^ denotes the dot power operator, 20 a, b, c ∈ R^+ are free parameters, and R is a nonzero random matrix of the same size as P. The inverse of the NTM (NTM^{−1}) is easily deduced:

P = f^{−1}(P~) = ((P~ − c·R)/a).^(1/b)    (9)
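
The NTM and its inverse are straightforward to realize element-wise; the short Matlab sketch below (ours, with arbitrary example parameters) checks that Equation (9) recovers the original matrix.

    % Minimal sketch: forward NTM (Equation (8)) and inverse NTM (Equation (9)).
    a = 3; b = 2; c = 4;                 % free parameters a, b, c > 0 (arbitrary)
    P = rand(4, 5);                      % test matrix standing in for P
    R = rand(size(P)) + 0.1;             % nonzero random matrix of the same size
    Pt = a*P.^b + c*R;                   % forward transform, Equation (8)
    Prec = ((Pt - c*R)/a).^(1/b);        % inverse transform, Equation (9)
    disp(max(abs(Prec(:) - P(:))))       % close to 0 (round-off error only)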

Generating pseudo-random sequence and cryptosystem

The process for generating a pseudo-random sequence is crucial in a cryptosystem. Despite significant achievements in this area, 21 the development of more secure pseudo-random number generators remains a pressing need in applied cryptography.

This section introduces a novel pseudo-random sequence generation mechanism based on 6D-CNN and EHM. To prevent chaos degradation due to finite precision, the outputs of 6D-CNN are numerically processed for the subsequent iteration. A nonlinear transform and a weighted combination of two pseudo-random sequences generated by 6D-CNN and EHM are used to create new pseudo-random numbers.

As per Equation (1), the 6D-CNN is described by a continuous first-order differential equation. For smooth chaotic sequence generation, the continuous form of 6D-CNN is discretized. Common discretization methods for first-order differential equations include the Runge–Kutta method, the Euler method, and the improved Euler method. The Euler method is employed here for simplicity.

x_1^{k+1} = x_1^k + h(−x_3^k − x_4^k)
x_2^{k+1} = x_2^k + h(2x_2^k + x_3^k)
x_3^{k+1} = x_3^k + h(14x_1^k − x_2^k)
x_4^{k+1} = x_4^k + h(100x_1^k − 100x_4^k + 100|x_4^k + 1| − 100|x_4^k − 1|)
x_5^{k+1} = x_5^k + h(x_1^k + 18x_2^k − x_5^k)
x_6^{k+1} = x_6^k + h(100x_2^k + 4x_5^k − 4x_6^k)    (10)

where h is the step length of each iteration.

Generating pseudo-random sequence

The mechanism to generate the pseudo-random sequence is as follows:

Initialize the parameters x_1^0, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h and use Equation (10) to generate a chaotic sequence. Apply the following transform to the outputs of the k-th iteration:

x~_j^(k) = [ d(x_{j mod 6}^(k)) + d(x_{(j+1) mod 6}^(k)) + d(x_{(j+2) mod 6}^(k)) + d(x_{(j+3) mod 6}^(k)) ] / 4,   j = 1, 2, …, 6    (11)

where d(·) denotes the decimal (fractional) part of its argument, and x~_j^(k) is the input to the (k+1)-th iteration. Continue until the sequence length is sufficient. This sequence is denoted as X = {x_i}, i = 1, 2, …, m.

Set the parameters α, β, γ, λ, μ, x_0, y_0 and use Equation (3) to derive another chaotic sequence of the same size as X, written as Y = {y_i}, i = 1, 2, …, m. To eliminate the effect of the initial values, the first 1000 iteration values of Equation (10) are discarded; that is, X is assigned from the 1001st iteration onwards.

Then, implement the transforms expressed in Equation (8) on the sequences X and Y, respectively, as follows:

X^ = a_1·X.^b_1 + c_1·R_1
Y^ = a_2·Y.^b_2 + c_2·R_2    (12)

where a_1, b_1, c_1, a_2, b_2, c_2 ∈ R^+ denote free constants and R_1, R_2 are nonzero random matrices. As noted for Equation (8), the symbol X.^a denotes the dot power operator of the matrix, that is, each element of the matrix X is raised to the power a.

Next, convert the outcomes of Equation (12) to integer data:

X¯ = floor(X^ × 2^15) mod 256
Y¯ = floor(Y^ × 2^15) mod 256    (13)

where floor() is the downward rounding function, and mod() is the modulus operator.

Finally, combine sequence X¯ and Y¯ with weights:

Prn = ω·X¯ + (1 − ω)·Y¯    (14)

where 0 ≤ ω ≤ 1 is the weight. The sequence Prn is the final pseudo-random sequence. In the following sections, Prn is called the Pseudo-Random Sequence based on 6D-CNN and Chaos (PRSCC).
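
To make the whole pipeline concrete, the Matlab sketch below strings Equations (10)–(14) together under our reading of them (it is not the authors' code): the index convention in Equation (11), the choice of which state component forms X, and the random matrices R1, R2 are assumptions; the parameter values are the example keys used later in the simulations.

    % Minimal sketch of the PRSCC generator (Equations (10)-(14)).
    m = 1e4;  burn = 1000;                    % output length and discarded iterations
    h = 0.1;  x = [1.1 2.2 3.3 4.4 5.5 6.6];  % step size and 6D-CNN initial state
    d = @(t) t - floor(t);                    % decimal (fractional) part, Eq. (11)
    X = zeros(1, m + burn);  xn = zeros(1, 6);
    for k = 1:(m + burn)
        % Euler step of the 6D-CNN, Equation (10)
        xn(1) = x(1) + h*(-x(3) - x(4));
        xn(2) = x(2) + h*(2*x(2) + x(3));
        xn(3) = x(3) + h*(14*x(1) - x(2));
        xn(4) = x(4) + h*(100*x(1) - 100*x(4) + 100*abs(x(4)+1) - 100*abs(x(4)-1));
        xn(5) = x(5) + h*(x(1) + 18*x(2) - x(5));
        xn(6) = x(6) + h*(100*x(2) + 4*x(5) - 4*x(6));
        % Averaging transform of Equation (11); indices taken cyclically in 1..6
        for j = 1:6
            idx = mod((j:j+3) - 1, 6) + 1;
            x(j) = mean(d(xn(idx)));
        end
        X(k) = x(1);                          % one sample per iteration (assumption)
    end
    X = X(burn+1:end);                        % discard the first 1000 values

    % EHM sequence of Equation (3), with the simulation keys
    al = 1.2; be = 0.5; ga = 0.78; la = 0.3; mu = 0.4;
    u = 0;  v = 0;  Y = zeros(1, m);
    for k = 1:m
        un = 1 - al*u^2 + be*u*v + ga*u + la*v;
        v = mu*u;  u = un;  Y(k) = u;
    end

    % Nonlinear transforms (12), quantization (13), weighted combination (14)
    a1 = 3; b1 = 2; c1 = 4; a2 = 5; b2 = 4; c2 = 3; w = 0.5;
    R1 = rand(1, m) + 0.1;  R2 = rand(1, m) + 0.1;   % nonzero random matrices
    Xq = mod(floor((a1*X.^b1 + c1*R1) * 2^15), 256);
    Yq = mod(floor((a2*Y.^b2 + c2*R2) * 2^15), 256); % b2 = 4 is even, so Y.^b2 is real
    Prn = w*Xq + (1 - w)*Yq;                  % the PRSCC of Equation (14)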

The key space of the cryptosystem

From the Generating pseudo-random sequence Section, it is evident that 21 free parameters (x_1^0, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h, α, β, γ, λ, μ, x_0, y_0, a_1, b_1, c_1, a_2, b_2, c_2, and ω) are involved in the generation of the PRSCC. Assuming these parameters are 8-bit integers, the cryptosystem's key space reaches 2^168, far exceeding the safety standard of 2^128. Thus, the cryptosystem is safeguarded against brute-force attacks.

National Institute of Standards and Technology randomness testing

SP800-22 R1a is an international standard for statistical randomness testing of binary sequences, issued by the National Institute of Standards and Technology. 22 SP800-22 contains 15 test items, some of which include several sub-items. The test outcome of each item includes two indicators: the p-value and the proportion. In general, the testing is performed at a known significance level α ∈ [0.001, 0.01] with the confidence interval [(1 − α) − 3·√(α(1 − α)/β), (1 − α) + 3·√(α(1 − α)/β)], where β is the number of tested groups. If the p-value is greater than the significance level α and the proportion lies within the confidence interval, the sequence is affirmed to pass the test. Suppose the significance level is α = 0.01 and the confidence interval is [0.9833, 0.9967].1,16 For 2000 PRSCCs of length 6 × 10^5 bits, we calculate the p-values and their proportions within the confidence interval. The outcomes are displayed in Table 1. It is observed that all p-values exceed 0.01, with proportions consistently falling within [0.9833, 0.9967]. Hence, we confirm that the PRSCC is random.

Table 1.

SP800-22 R1a testing of PRSCCs.

Number Index Criterion P-values Proportion Conclusions
1 Frequency analysis ≥0.01 0.3558 0.9894 Passed
2 Block frequency evaluation ≥0.01 0.4706 0.9858 Passed
3 Run test ≥0.01 0.7780 0.9909 Passed
4 Analysis of maximum consecutive ones in a block ≥0.01 0.1439 0.9854 Passed
5 Rank of binary matrices ≥0.01 0.3744 0.9928 Passed
6 Application of discrete Fourier transform ≥0.01 0.5814 0.9939 Passed
7 Matching with Non-overlapping templates ≥0.01 0.9644 0.9861 Passed
8 Matching with overlapping templates ≥0.01 0.6889 0.9878 Passed
9 Maurer's universal statistical test ≥0.01 0.2038 0.9915 Passed
10 Assessment of linear complexity ≥0.01 0.8089 0.9937 Passed
11 Serial test all ≥ 0.01 *0.0238 0.9858 Passed
12 Estimation of approximate entropy ≥0.01 0.7154 0.9962 Passed
13 Calculation of cumulative sums ≥0.01 0.1709 0.9887 Passed
14 Assessment of random excursions all ≥ 0.01 *0.0300 0.9846 Passed
15 Variants of random excursions analysis all ≥ 0.01 *0.2095 0.9853 Passed

Notes: (i) Due to random factors in the calculations, outcomes may vary between rounds; (ii) An asterisk (*) indicates the minimum value for the respective item.

PRSCC: Pseudo-Random Sequence based on 6D-CNN and Chaos.

Algorithms for encryption as well as decryption

This section consists of the following parts: i) the image encryption scheme and its mathematical description, which includes the generation of plaintext-related ciphers, pixel diffusion, and pixel scrambling; ii) the image decryption scheme and its mathematical description, which is the inverse process of image encryption; iii) the step-by-step description of the image encryption and decryption algorithms; iv) the detailed diagram of the encryption and decryption scheme; v) an overview of the main features of the proposed algorithms.

Encryption

Diffusion cipher related to plaintext and pixel diffusion

In order to generate a diffusion cipher linked to plaintext, the transformation defined in Equation (8) is applied between Pm and LRe , and also between Pn and URe , as follows:

CLd = a_3·Pm.^b_3 + c_3·LRe
CUd = a_4·Pn.^b_4 + c_4·URe    (15)

where Pm and Pn are the PRSCCs of lengths m and n , respectively. The outcomes CLd and CUd serve as the diffusion ciphers.

Traditional image encryption often employs basic diffusion schemes like pixel XOR or addition-and-modulus operations. 2 These operations, applied to single pixels, result in low computational efficiency. This paper introduces an XOR operation performed row by row, and also column by column:

C_i^r = C_{i−1}^r xor S_i^r xor P_i^r,   i = 1, 2, …, p
C_j^c = C_{j−1}^c xor S_j^c xor P_j^c,   j = 1, 2, …, q    (16)

where Cr,Sr,Pr are the row vectors of the cipher-text, cipher, and plaintext, and Cc,Sc,Pc are the column vectors of the cipher-text, cipher, and plaintext, respectively. For matrices LRe and URe , cipher-texts are computed as per Equation (16) to achieve pixel diffusion, with outcomes denoted as Ld and Ud .

The inverse diffusion process corresponding to Equation (16) is executed as follows:

C_j^c = C_{j+1}^c xor S_j^c xor P_j^c,   j = q, q−1, …, 1
C_i^r = C_{i+1}^r xor S_i^r xor P_i^r,   i = p, p−1, …, 1    (17)
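
One way to realize the row/column XOR diffusion of Equation (16) and its inverse in Matlab is sketched below; this is our interpretation, and the handling of the first row and column (which have no preceding cipher-text term) as well as the use of an 8-bit cipher matrix S are assumptions.

    % Minimal sketch: row-by-row and column-by-column XOR diffusion of an 8-bit
    % image P with a same-sized cipher S, and the corresponding inverse.
    P = randi([0 255], 4, 6, 'uint8');    % stands in for LRe or URe
    S = randi([0 255], 4, 6, 'uint8');    % stands in for the diffusion cipher
    [p, q] = size(P);

    % Forward diffusion (Equation (16)): rows first, then columns.
    C = zeros(p, q, 'uint8');  prevR = zeros(1, q, 'uint8');   % C_0^r := 0 (assumption)
    for i = 1:p
        C(i, :) = bitxor(bitxor(prevR, S(i, :)), P(i, :));
        prevR = C(i, :);
    end
    prevC = zeros(p, 1, 'uint8');                              % C_0^c := 0 (assumption)
    for j = 1:q
        C(:, j) = bitxor(bitxor(prevC, S(:, j)), C(:, j));
        prevC = C(:, j);
    end

    % Inverse diffusion (spirit of Equation (17)): undo the columns, then the rows.
    D = C;
    for j = q:-1:2
        D(:, j) = bitxor(bitxor(C(:, j-1), S(:, j)), C(:, j));
    end
    D(:, 1) = bitxor(S(:, 1), C(:, 1));
    Rec = D;
    for i = p:-1:2
        Rec(i, :) = bitxor(bitxor(D(i-1, :), S(i, :)), D(i, :));
    end
    Rec(1, :) = bitxor(S(1, :), D(1, :));
    disp(isequal(Rec, P))                 % should display 1 (true)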

Pixel scrambling

Using Matlab's “randperm” function, two random permutations Rsm and Rsn of the integers in the intervals [1, m] and [1, n] are generated. Based on these sequences, the diffused images Ld and Ud are rearranged to form Ls and Us, respectively; Ls and Us are the final encrypted images.
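
A minimal Matlab sketch of the scrambling step and its inverse is given below; permuting the image element-wise with a single randperm call is our simplification of the two-sequence scheme described above.

    % Minimal sketch: pixel scrambling with randperm and its inverse by sorting.
    Ld  = randi([0 255], 4, 6, 'uint8');  % stands in for a diffused image
    idx = randperm(numel(Ld));            % scrambling sequence (kept for decryption)
    Ls  = reshape(Ld(idx), size(Ld));     % scrambled (encrypted) image

    [~, invIdx] = sort(idx);              % ascending order gives the inverse permutation
    LDs = reshape(Ls(invIdx), size(Ls));  % descrambled image
    disp(isequal(LDs, Ld))                % should display 1 (true)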

Decryption

Decryption essentially involves reversing the encryption operations on the cipher image. Firstly, the cipher images Ls and Us are rearranged according to the ascending order of the sequences Rsm and Rsn; this step retrieves the descrambled images LDs and UDs. Next, applying Equation (17) retrieves the images LDd and UDd with the diffusion removed. Finally, these matrices are converted back into the triangular matrices LDt and UDt as per Equation (6), and the plain image matrix A is reconstructed using Equation (5).

Algorithm description

  1. Image decomposition

    Step 1. Convert the original image to a grayscale matrix A.

    Step 2. Decompose the matrix A using Matlab into L and U matrices.

    Step 3. Transform L and U matrices into secondary plain images LRe and URe , respectively.

  2. Image encryption

    Step 4. Generate the pseudo-random sequence Pm as described in Generating pseudo-random sequence Section.

    Step 5. Apply transformations in Equation (15) to compute diffusion ciphers CLd and CUd .

    Step 6. Utilize Equation (16) to compute the diffused images Ld and Ud.

    Step 7. Rearrange Ld and Ud into Ls and Us, respectively, using the method in the Pixel scrambling Section.

  3. Image decryption and reconstruction

    Step 8. Rearrange the cipher-texts Ls and Us to obtain the descrambled images LDs and UDs.

    Step 9. Use Equation (17) to obtain the images LDd and UDd with the diffusion removed.

    Step 10. Convert LDd and UDd into the lower and upper triangular matrices LDt and UDt, respectively.

    Step 11. Recompose the plain image as per the formula A = P^{−1} LDt UDt.

Algorithm flow chart

The algorithm flow chart is illustrated in Figure 1.

Figure 1. The flow chart of the proposed algorithm.

Characteristics of the algorithm

The proposed algorithm exhibits its distinct features primarily in two aspects. First, a composite cryptosystem and cipher generation mechanism have been constructed based on a high-dimensional CNN and a chaotic map with multiple degrees of freedom. This structure effectively ensures the security of the pseudo-random sequences and stream ciphers. Second, the encryption process is applied not directly to the original plain image but to two derivative images, namely the L-image and U-image, which result from rearranging the L and U matrices decomposed from the plain image.

Advantages of these features include:

  • - Substantial reduction in cipher sequence length: For an image of size m × n, the cipher sequence length required for direct encryption is also m × n. In contrast, the proposed algorithm requires a cipher length of either n(2m − n + 1)/2 or n(n + 1)/2 (refer to the LU decomposition of matrix and rearrangement of LU matrices Section). When m ≥ n ≥ 2, m × n is significantly larger than n(2m − n + 1)/2, and it is also larger than n(n + 1)/2. For instance, when m = n = 256, the length of the direct encryption cipher is 256 × 256, whereas the indirect encryption cipher length is approximately 128 × 257, roughly half of the direct encryption. Such a reduction in cipher length is notably beneficial in applied cryptography, where the challenge lies in generating a substantial volume of pseudo-random numbers with desirable statistical characteristics for a diffusion cipher linked to plaintext.

  • - Decreased computational load in decomposition encryption: The size of L-image or U-image is significantly smaller than the initial image, thereby reducing the computational demand. In traditional pixel diffusion schemes, the number of diffusion operations approximates the cipher length. This algorithm effectively halves the computational requirements.

  • - Reduced risk of interception during transmission: In public key cryptosystems, keys and cipher-texts are transmitted separately over two channels. Suppose that the probability of the information being intercepted in one transmission is ε ∈ [0, 1]. Under normal conditions, ε is a positive decimal very close to 0. Thus, in direct encryption, the probability of the keys and the cipher-text being intercepted simultaneously is ε^2. In our encryption scheme, however, two groups of keys and two cipher-texts are transmitted over four different channels, so the corresponding probability is ε^4. Since ε^4 << ε^2, this approach significantly lowers the overall risk of information leakage.

Encryption and decryption simulations

In this section, to verify the feasibility and effectiveness of the proposed algorithms, the simulations of encryption and decryption algorithms will be conducted in a specific experimental environment. The experimental images are selected from the international standard database for image processing.

Selected from the Caltech 101 universal image dataset, 23 the experimental images, namely “Face” (300 × 300) and “Lamp” (320 × 480), undergo simulations on the Matlab 2019a platform. The computational environment comprises an Intel Core (TM) i7 CPU (2.4 GHz), 8.0 GB RAM, running Windows 10.

Figures 2 and 3 display the simulation outcomes for the original images “Face” and “Lamp,” respectively. The initial keys are set as follows:

x_1^0 = 1.1, x_2^0 = 2.2, x_3^0 = 3.3, x_4^0 = 4.4, x_5^0 = 5.5, x_6^0 = 6.6, h = 0.1,
α = 1.20, β = 0.50, γ = 0.78, λ = 0.30, μ = 0.40, x_0 = 0, y_0 = 0,
a_1 = 3.00, b_1 = 2.00, c_1 = 4.00, a_2 = 5.00, b_2 = 4.00, c_2 = 3.00, ω = 0.5,
a_3 = 4.00, b_3 = 2.00, c_3 = 2.00, a_4 = 4.00, b_4 = 2.00, c_4 = 2.00.

Figure 2. Encryption as well as decryption outcomes of “Face.”

Figure 3. Encryption as well as decryption outcomes of “Lamp.”

In Figures 2 and 3, sub-figure (a) depicts the original plain image. Sub-figures (b) and (c) represent the L-image and U-image, derived from the L-matrix and the U-matrix, respectively, referred to as secondary plain images. Sub-figures (d) and (e) illustrate the encrypted versions of these secondary images. Sub-figure (f) shows the decrypted version of the initial image.

In assessing image quality, the peak signal-to-noise ratio (PSNR)1,2 is a prevalent index for evaluating the similarity between the original and processed images. For two images I and K with identical dimensions of m × n, the PSNR is derived from the mean square error (MSE), formulated as follows:

MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i, j) − K(i, j)]^2    (18)
PSNR = 10·log_10((2^k − 1)^2 / MSE)    (19)

where k signifies the number of bits per pixel. Generally, a higher PSNR value indicates that the processed image is closer in quality to the original.

Another metric, called the structural similarity (SSIM),1,2 assesses the similarity of two statistical variables of the same size. Given variables x and y, SSIM is defined as follows:

SSIM(x, y) = [(2μ_x μ_y + c_1)(2σ_{xy} + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]    (20)

In the equation, μ_x denotes the mean value of x, σ_x^2 represents the variance of x, σ_{xy} denotes the covariance between x and y, and c_1, c_2 denote constants.
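
For reference, the Matlab sketch below (ours) evaluates Equations (18)–(20) for two same-sized 8-bit images; the ssim call assumes the Image Processing Toolbox is available.

    % Minimal sketch: MSE/PSNR of Equations (18)-(19) and SSIM of Equation (20).
    I = randi([0 255], 300, 300, 'uint8');   % stands in for a plain image
    K = randi([0 255], 300, 300, 'uint8');   % stands in for a processed/cipher image
    dI = double(I);  dK = double(K);
    MSE  = mean((dI(:) - dK(:)).^2);         % Equation (18)
    kBit = 8;                                % bits per pixel
    PSNR = 10*log10((2^kBit - 1)^2 / MSE);   % Equation (19)
    S    = ssim(K, I);                       % Equation (20), Image Processing Toolbox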

This section calculates the PSNR and SSIM for relevant image pairs, with data summarized in Table 2. P1 and S1 denote PSNR and SSIM between the cipher-texts and secondary plaintexts; P2 and S2 refer to PSNR and SSIM involving the decrypted as well as original plain images, respectively.

Table 2.

PSNR and SSIM involving the plain image as well as the cipher one.

Items Face Lamp The theoretical value
L-image U-image L-image U-image
P1 5.8352 5.6696 5.8337 5.9673
S1 0.0023 0.0029 0.0025 0.0038 0.0000
P2 +∞ +∞ +∞
S2 1.0000 1.0000 1.0000

Table 2 indicates the effective encryption as well as decryption capabilities of the introduced algorithm, affirming its feasibility and efficacy.

Evaluation of algorithm performance

A comprehensive evaluation of the proposed algorithm's security and its resistance to various attacks is conducted in this section, which includes: the size of the key space, the analysis of the key sensitivity and equivalent key sensitivity, the gray histogram and surface of the plain images and cipher images, the analysis of the pixel correlation, the computation of the information entropy (IE), the analysis of the plaintext and cipher-text sensitivity, and the computation and interpolation description of the algorithm's average running time.

Key space

In general, the key space encompasses a range of possible key values, where a larger key space bolsters resistance to brute-force attacks. For 8-bit integer images, a key space exceeding 128 bits is considered secure.

The proposed algorithm's key space encompasses 27 keys. Assuming these keys are double-precision decimals, the key space approximates log_2(10^378) ≈ 1256 bits. Even when the keys are confined to the conservative range of [10^−4, 10^4], the key space remains no less than log_2(10^216) ≈ 716 bits, offering ample security against exhaustive attacks.

Key sensitivity as well as equivalent key sensitivity

This subsection delves into both the key sensitivity as well as the equivalent key sensitivity of the introduced algorithm, providing an alternative perspective on its resistance to exhaustive attacks.

Key sensitivity

Key sensitivity, an essential metric, evaluates an algorithm's defense against brute-force attacks. It encompasses two dimensions: sensitivity during encryption and sensitivity during decryption. High key sensitivity during encryption is demonstrated when two marginally different key sets encrypt the same image and produce significantly distinct cipher images. Conversely, high sensitivity during decryption is evident when slightly varied keys decrypt the same cipher image and yield images drastically different from the original plaintext.

Prior to analyzing key sensitivity, several quantitative image comparison indicators are introduced: correlation coefficients (CORR), amount of pixels change rate (NPCR), unified average changing intensity (UACI), as well as block average changing intensity (BACI).1,2 CORR is calculated by:

Corr_{u,v} = cov(u, v) / √(D(u)·D(v))
           = [ Σ_{i=1}^{n} (u_i − E(u))(v_i − E(v)) / n ] / [ √(Σ_{i=1}^{n} (u_i − E(u))^2 / n) · √(Σ_{i=1}^{n} (v_i − E(v))^2 / n) ]    (21)

In the equation, u = {u_i}, v = {v_i}, i = 1, 2, …, n; E(u) denotes the mean value of u, D(u) represents the variance of u, and cov(u, v) denotes the covariance between u and v.

For two same-sized images, NPCR indicates the proportion of differing pixels to the total image size, while UACI measures the average absolute difference rate relative to 255 (the greatest difference). NPCR as well as UACI calculations for images C1 and C2 are computed as follows:

NPCR(C_1, C_2) = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} |sign(C_1(i, j) − C_2(i, j))| × 100%    (22)
UACI(C_1, C_2) = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} [ |C_1(i, j) − C_2(i, j)| / 255 ] × 100%    (23)

In the equations, m and n denote the height and width of the images, while sign(·) denotes the sign function.

BACI, offering a more intricate approach than UACI, measures the differences between two identically sized images on a block-by-block basis. Detailed computation methods for BACI are available in the literature.1,2
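
The Matlab sketch below (ours) computes CORR, NPCR, and UACI for two same-sized cipher images directly from Equations (21)–(23); BACI is omitted here for brevity.

    % Minimal sketch: CORR (Eq. (21)), NPCR (Eq. (22)) and UACI (Eq. (23)).
    C1 = randi([0 255], 300, 300);            % stands in for a cipher image
    C2 = randi([0 255], 300, 300);            % stands in for a second cipher image
    u = C1(:);  v = C2(:);
    CORR = sum((u - mean(u)).*(v - mean(v))) / ...
           sqrt(sum((u - mean(u)).^2) * sum((v - mean(v)).^2));   % Equation (21)
    NPCR = mean(C1(:) ~= C2(:)) * 100;                            % Equation (22), in %
    UACI = mean(abs(C1(:) - C2(:)) / 255) * 100;                  % Equation (23), in %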

Given the 27 distinct keys in the key space, three keys, x_1^0, λ, and c_3, are selected for sensitivity testing. Assuming the initial key set K_0 is as assigned in the Encryption and decryption simulations Section and the increment for the three keys is 10^−12, four key groups are chosen:

K_0 = (x_1^0, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h, α, β, γ, λ, μ, x_0, y_0, a_1, b_1, c_1, a_2, b_2, c_2, ω, a_3, b_3, c_3, a_4, b_4, c_4);
K_1 = (x_1^0 + 10^−12, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h, α, β, γ, λ, μ, x_0, y_0, a_1, b_1, c_1, a_2, b_2, c_2, ω, a_3, b_3, c_3, a_4, b_4, c_4);
K_2 = (x_1^0, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h, α, β, γ, λ + 10^−12, μ, x_0, y_0, a_1, b_1, c_1, a_2, b_2, c_2, ω, a_3, b_3, c_3, a_4, b_4, c_4);
K_3 = (x_1^0, x_2^0, x_3^0, x_4^0, x_5^0, x_6^0, h, α, β, γ, λ, μ, x_0, y_0, a_1, b_1, c_1, a_2, b_2, c_2, ω, a_3, b_3, c_3 + 10^−12, a_4, b_4, c_4).
  (i) Key sensitivity in the process of encryption

The four key sets are used to encrypt the L-image and U-image of the “Face” image. The resultant cipher images are revealed in Figure 4. At first glance, discerning differences between the four cipher-texts is challenging. Objective assessment of the cipher images is conducted using indicators such as CORR, SSIM, NPCR, UACI, as well as BACI, with outcomes tabulated in Table 3. The values closely align with theoretical expectations, demonstrating the algorithm's high sensitivity to keys during the encryption process.


Figure 4. Cipher-texts generated using four key sets.

Table 3.

Sensitivity analysis of keys in image encryption.

K1-K0 K2-K0 K3-K0 Theoretical values
Image L-image U-image L-image U-image L-image U-image
CORR −0.0037 0.0051 −0.0047 0.0038 0.0019 −0.0063 0.0000
SSIM 0.0030 0.0111 0.0022 0.0100 0.0062 −2.2149 × 10−4 0.0000
NPCR 99.5814% 99.5925% 99.6545% 99.6190% 99.5836% 99.6035% 99.6094%
UACI 33.5332% 33.3723% 33.5224% 33.4531% 33.3474% 33.6330% 33.4636%
BACI 26.7332% 26.7738% 26.6802% 26.6830% 26.7167% 26.8466% 26.7712%

CORR: correlation coefficient; NPCR: amount of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.

  (ii) Key sensitivity in the process of decryption

Use K_0, K_1, K_2, and K_3 to decrypt the cipher image generated with K_0 and record the decrypted images as P_0, P_1, P_2, and P_3, respectively. Then calculate the CORR, SSIM, and NPCR between P_0 and P_1, P_0 and P_2, as well as P_0 and P_3. The outcomes, compiled in Table 4, demonstrate the algorithm's sensitivity to key variations during the decryption process.

Table 4.

Sensitivity analysis of keys in image decryption.

P1-P0 P2-P0 P3-P0 Theoretical values
Image L-image U-image L-image U-image L-image U-image
CORR −0.0012 −0.0017 −0.0067 −0.0035 −0.0017 −0.0012 0.0000
SSIM 0.0055 0.0047 1.4026 × 10−4 0.0031 0.0055 0.0047 0.0000
NPCR 99.5969% 99.6058% 99.6323% 99.6788% 99.5969% 99.6085% 99.6094%

CORR: correlation coefficient; NPCR: amount of pixels change rate.

Equivalent key sensitivity

The analysis extends beyond key sensitivity to equivalent key sensitivity, crucial in symmetric cryptography security analysis. The equivalent key, typically derived from chaotic systems as a pseudo-random sequence, is fundamental in deciphering cipher-text. In the proposed algorithm, the equivalent keys X = {x_i}, i = 1, 2, …, m and Y = {y_i}, i = 1, 2, …, m, generated by Equation (11) and Equation (3), are critically analyzed.

  (i) Analysis of equivalent key sensitivity in the process of encryption

In the selection process of the key set K_0, it is assumed that Equation (11) and Equation (3) independently generate two pseudo-random sequences X_0 and Y_0, whose combined output is denoted by S_0. The proposed encryption algorithm is performed to encrypt the plain image P_0 with S_0, giving the cipher image C_0. With the increment c = 10^−12, an additional set of three pseudo-random sequence combinations is introduced, as indicated by:

S_1: {X_0 + c, Y_0},   S_2: {X_0, Y_0 + c},   S_3: {X_0 + c, Y_0 + c}.

Subsequently, the plain image P0 undergoes encryption through the same algorithm, utilizing S1,S2 and S3 , yielding the respective cipher images C1,C2 and C3 . Table 5 presents a comparative analysis of the two cipher images, pre- and post-equivalent key alteration. The close approximation of all data to theoretical values suggests a high sensitivity of the image encryption algorithm to changes in equivalent keys during the encryption phase.


Table 5.

Sensitivity analysis of equivalent keys in image encryption.

C1-C0 C2-C0 C3-C0 Theoretical values
Image L-image U-image L-image U-image L-image U-image
CORR −0.0020 0.0058 0.0028 −0.0011 −0.0107 −0.0020 0.0000
SSIM 0.0044 0.0117 0.0086 0.0036 −0.0065 −0.0027 0.0000
NPCR 99.5903% 99.5880% 99.6013% 99.6213% 99.5880% 99.6213% 99.6094%
UACI 33.4984% 33.3256% 33.3864% 33.5294% 33.6529% 33.5294% 33.4636%
BACI 26.6885% 26.8568% 26.7264% 26.8962% 26.8067% 26.8163% 26.7712%

CORR: correlation coefficient; NPCR: amount of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.

  (ii) Sensitivity analysis of equivalent keys in decryption

The decryption of the cipher image is performed using S_0, S_1, S_2, and S_3, resulting in the decrypted images P_0, P_1, P_2, and P_3, respectively. A comparative evaluation of these images, conducted before and after the variation in equivalent keys, is detailed in Table 6. This data substantiates the sensitivity of the proposed algorithm to equivalent keys during the decryption process.

Table 6.

Sensitivity analysis of equivalent keys in image decryption.

P1-P0 P2-P0 P3-P0 Theoretical values
Image L-image U-image L-image U-image L-image U-image
CORR −0.0046 −0.0030 −0.0046 −0.0028 3.3794  × 10−4 0.0020 0.0000
SSIM 0.0036 0.0061 −8.7072  × 10−5 −6.6045  × 10−5 0.0037 0.0047 0.0000
NPCR 99.6301% 99.6390% 99.6168% 99.5969% 99.6201% 99.5925% 99.6094%

CORR: correlation coefficient; NPCR: amount of pixels change rate.

The gray histogram and surface

In the realm of image encryption algorithms, those with heightened security and robustness typically exhibit a uniformly distributed gray scale in cipher-texts. This characteristic manifests in the gray histogram and surface of the cipher-text. Figures 5 and 6 display the gray histograms and surfaces for the secondary plaintexts and their corresponding cipher-texts of the experimental image “Lamp.”

Figure 5. Gray histogram surfaces of plain as well as cipher images.

Figure 6. Gray surfaces of plain as well as cipher images.

Analysis of Figures 5 and 6 reveals fluctuating histograms and gray surfaces for the secondary plain image, in contrast to the flat and uniform histograms of the cipher-texts. The cipher images’ gray surfaces exhibit a nearly uniform height. These observations confirm the statistical security and robustness of the proposed encryption algorithm.

Pixel correlation

Given that the original unmodified image represents an accurate depiction of the subject, a specific level of correlation among its pixels can be anticipated. This correlation is expected to extend to the pixels of the L and U images, as they are derived from the original image. A crucial indicator of the security and robustness of an encryption algorithm is its ability to disrupt this inherent pixel correlation. Successful encryption would result in a pixel correlation within the ciphered images that is distinctly different from that in the L and U images.

In the context of the experimental Face image, 2000 pixels were randomly sampled in the horizontal, vertical, and diagonal orientations. Subsequently, the CORR values for these three sets of pixels were calculated for both the L and U images and their encrypted variants. The findings are systematically presented in Table 7. Moreover, the distribution of pixels in these scenarios is illustrated in Figures 7 and 8.

Table 7.

Correlations between plain as well as cipher images in three directions.

Horizontal Vertical Diagonal Theoretical value
Secondary plaintext L-image 0.1502 0.0582 0.0651
U-image 0.3539 0.2611 0.2734
cipher-text L-image 0.0233 0.8488 × 10−4 0.0130 0.0000
U-image 0.0195 0.0347 0.0057

Figure 7. Pixel distributions of L-image and cipher-text for L-image.

Figure 8. Pixel distributions of U-image and cipher-text for U-image.

Figures 7 and 8 indicate a concentration of pixels in lower triangular areas for L and U plain images, whereas cipher-text pixels display an even distribution in rectangular regions. This suggests a complete alteration of pixel correlation in the plaintexts by the encryption algorithm, a conclusion corroborated by the data in Table 7.

Information entropy

IE measures the randomness and unpredictability in an information system. For an 8-bit integer grayscale image, the maximal IE value is 8. An image encryption algorithm elevating the IE of plaintext towards this maximum is deemed secure and robust. The IE for an 8-bit image is computed as follows:

IE = −Σ_{i=0}^{255} x_i · log_2(x_i)    (24)

where the occurrence frequency of a pixel with a specified value i is denoted as xi . 24

In image processing, relative entropy and information redundancy are key metrics for assessing cipher image quality. For an 8-bit grayscale image, the relative entropy is the ratio IE/8; it is also referred to as the image compression rate and has an upper limit of 1. The deviation of this value from 1 is termed the information redundancy.
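
A short Matlab sketch (ours) of these three quantities for an 8-bit image follows; the random test matrix merely stands in for a cipher image.

    % Minimal sketch: information entropy (Equation (24)), relative entropy IE/8,
    % and information redundancy 1 - IE/8 for an 8-bit grayscale image.
    img = randi([0 255], 320, 480, 'uint8');          % stands in for a cipher image
    cnt = histcounts(double(img(:)), -0.5:1:255.5);   % occurrences of each gray level
    p   = cnt / numel(img);                           % occurrence frequencies x_i
    p   = p(p > 0);                                   % drop empty bins (0*log2(0) -> 0)
    IE  = -sum(p .* log2(p));                         % Equation (24)
    relEntropy = IE / 8;                              % relative entropy / compression rate
    redundancy = 1 - relEntropy;                      % information redundancy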

For the experimental images “Face” and “Lamp,” the information entropies, relative entropies, and information redundancies were calculated and are shown in Table 8, together with the corresponding data for other test images of sizes 256 × 256, 512 × 512, and 1024 × 1024. The outcomes indicate that the information entropies of the cipher-texts surpass those of the plaintexts, closely approaching the value of 8. Furthermore, the relative entropies (compression rates) of the plaintexts are notably less than 1, while those of the cipher-texts are almost 1, indicating significant compression potential in the plaintexts and minimal compression possibility in the cipher-texts. Additionally, the redundancy of each cipher-text is markedly lower than that of the corresponding plaintext and approaches 0, indicating minimal information redundancy in the cipher images. These findings confirm the security and robustness of the proposed algorithm.

Table 8.

Entropies, relative entropy, as well as information redundancy of images of plain and cipher.

Plaintext Name /Size IE Relative entropy Redundancy
L-image U-image L-image U-image L-image U-image
Face(300 × 300) 4.8440 4.8440 0.6055 0.6055 39.45% 39.45%
Lamp(320 × 480) 4.7528 6.7604 0.5941 0.8451 40.59% 15.50%
Bird(256 × 256) 5.2056 5.2801 0.7293 0.6673 58.62% 41.35%
Leopard(512 × 512) 4.8769 6.1125 0.6521 0.7125 33.75% 31.22%
Landscape(1024 × 1024) 5.3333 4.9956 0.8032 0.7288 28.68% 39.66%
Cipher-text Name/Size IE Relative entropy Redundancy
L-image U-image L-image U-image L-image U-image
Face(300 × 300) 7.9954 7.9986 0.9994 0.9998 0.0575% 0.0175%
Lamp(320 × 480) 7.9959 7.9978 0.9995 0.9997 0.0513% 0.0275%
Bird(256 × 256) 7.9931 7.9946 0.9994 0.9997 0.0375% 0.0032%
Leopard(512 × 512) 7.9968 7.9975 0.9996 0.9998 0.0588% 0.0215%
Landscape(1024 × 1024) 7.9986 7.9981 0.9999 0.9999 0.0630% 0.0344%
Theoretical values 8.0000 8.0000 1.0000 1.0000 0.0000 0.0000

IE: information entropy.

Plaintext sensitivity and cipher-text sensitivity

Plaintext sensitivity

In applied cryptography, plaintext sensitivity refers to the degree of impact that variations in the plaintext have on the cipher-text. This is a crucial measure of an image encryption algorithm's defense against differential attacks such as chosen-plaintext attacks. An encryption algorithm is considered sensitive to plaintext if a minor change in the plaintext results in a significant alteration of the cipher-text, with the keys remaining constant.

A pixel increment of 10^−12 is assumed in the plaintext. For the “Face” and “Lamp” images, sensitivity indicators were calculated for the ciphers before and after the plaintext variation and recorded in Table 9. The related data for other test images of different sizes, including 256 × 256, 512 × 512, and 1024 × 1024, are also listed in Table 9. The data, closely aligned with the theoretical values, confirm the proposed algorithm's plaintext sensitivity.

Table 9.

Plaintext sensitivity analysis of the proposed algorithm.

Name /Size CORR SSIM NPCR UACI BACI
L-image U-image L-image U-image L-image U-image L-image U-image L-image U-image
Face(300 × 300) −5.1508 × 10−4 −0.0040 0.0050 0.0021 99.6833% 99.6279% 33.4407% 33.5629% 26.7558% 26.8586%
Lamp(320 × 480) −9.4383 × 10−4 −0.0019 0.0048 0.0040 99.6042% 99.5883% 33.4522% 33.5095% 26.7864% 26.8023%
Bird(256 × 256) −4.8876 × 10−4 −0.0033 0.0031 0.0017 99.6739% 99.6231% 33.4452% 33.5130% 26.7428% 26.8190%
Leopard (512 × 512) −3.2145 × 10−4 −0.0027 0.0052 0.0049 99.6457% 99.6288% 33.4538% 33.5061% 26.7735% 26.7821%
Landscape (1024 × 1024) −6.1964 × 10−4 −0.0037 0.0063 0.0068 99.6313% 99.6137% 33.4689% 33.4702% 26.7653% 26.7746%
Theoretical values 0.0000 0.0000 99.6094% 33.4636% 26.7712%

CORR: correlation coefficient; NPCR: amount of pixels change rate; UACI: unified average changing intensity; BACI: block average changing intensity.

Cipher-text sensitivity

Cipher-text sensitivity, akin to plaintext sensitivity, assesses the decryption algorithm's capacity to withstand differential attacks like chosen-cipher-text attacks. It gauges the extent to which variations in the cipher-text affect the plaintext. Employing a specific set of keys for encryption and decryption, when a slight alteration in the cipher image leads to a significant change in the plaintext upon decryption, the decryption algorithm is considered sensitive to the cipher image.

The variation in the cipher image was set to 10^−12. Sensitivity data for the “Face” and “Lamp” images, before and after the cipher image alteration, are tabulated in Table 10. The related data for other test images of different sizes, including 256 × 256, 512 × 512, and 1024 × 1024, are also listed in Table 10.

Table 10.

Cipher-text sensitivity analysis of the proposed algorithm.

Image name Image size CORR SSIM NPCR
L-image U-image L-image U-image L-image U-image
Face 300 × 300 −0.0040 −0.0012 0.0024 0.0040 99.7912% 99.8076%
Lamp 320 × 480 −0.0020 0.0061 0.0041 0.0122 99.7839% 99.7927%
Bird 256 × 256 0.0018 0.0021 0.0015 0.0026 99.7301% 99.7569%
Leopard 512 × 512 0.0035 0.0043 0.0048 0.0076 99.6920% 99.7233%
Landscape 1024 × 1024 0.0058 0.0065 0.0075 0.0081 99.6386% 99.6549%
Theoretical values 0.0000 0.0000 99.6094%

CORR: correlation coefficient; NPCR: amount of pixels change rate.

This data substantiates the sensitivity of the proposed decryption algorithm to cipher image changes.

The average running time of the algorithm

This subsection evaluates the running time of the proposed algorithm. The computational environment comprises an Intel Core (TM) i7 CPU (2.4 GHz), 8.0 GB RAM, running Windows 10. For images of different sizes, we tested and calculated the average running time of the algorithm over 100 runs; the averages are listed in Table 11. Besides, Figure 9 shows the time interpolation curves obtained with the piecewise cubic Hermite interpolation polynomial method. The time consumption of the algorithm is quite limited, and the proposed algorithm is easy to implement in existing computing environments. It should be noted that, since some random factors are involved in the algorithm and the experimental results depend closely on the computing configuration, the given data are relative and for reference only.

Table 11.

The data of the average times of the algorithm.

Name Bird Face Lamp Leopard Landscape
Size 256 × 256 300 × 300 320 × 480 512 × 512 1024 × 1024
Average encryption time (s) 0.0838 0.0623 0.0764 0.2633 0.5641
Average decryption time (s) 0.0786 0.0579 0.0688 0.2440 0.4601

Figure 9. Time interpolation curves of PCHIP. PCHIP: piecewise cubic Hermite interpolation polynomial.

Comparison with other algorithms

This subsection compares the proposed algorithm with those cited in References 7, 9, 25, and 26, based on experimental data from the “Lamp” image. It is noteworthy that the comparison involves only a subset of performance indicators. The experimental results are presented in Table 12.

Table 12.

Selected performance information for various algorithms.

Reference 7 Reference 9 Reference 25 Reference 26 Our algorithm Theoretical value
Key space 2^412 2^10240 2^128 2^199 2^1256 ≥2^128
IE 7.9997 7.9996 7.9972 7.9976 7.9982 8.0000
Plain sensitivity NPCR 99.6118% 99.7312% 99.6233% 99.6218% 99.5963% 99.6094%
UACI 33.5036% 33.4226% 33.6544% 33.5527% 33.4809% 33.4636%
Note The data represents the mean value of two original data sets.

NPCR: amount of pixels change rate; UACI: unified average changing intensity; IE: information entropy.

For all five algorithms under comparison, as shown in Table 12, all indicators meet the security requirements. Theoretically, this suggests the feasibility and effectiveness of these algorithms. To comprehensively evaluate their quality, identical performance indicators were rated on a scale. The scoring criteria were as follows: 5 points for the best performance, descending to 1 point for the least effective. The scoring outcomes are detailed in Table 13. These scores lead to the conclusion that the proposed algorithm holds local comparative advantages and good overall performance.

Table 13.

Performance data score of different algorithms.

Reference 7 Reference 9 Reference 25 Reference 26 Our algorithm
Key space Score 3 5 1 2 4
IE Score 5 4 1 2 3
NPCR Score 5 1 2 3 4
UACI Score 4 3 1 2 5
Sum 17 13 5 9 16

NPCR: amount of pixels change rate; UACI: unified average changing intensity; IE: information entropy.

Conclusions

To address some unresolved issues in image encryption, such as key space, cipher generation, security verification, and encryption schemes, this paper constructed an encryption algorithm for digital images using the 6D-CNN and matrix LU decomposition. The paper comprehensively discussed the mathematical foundation of the algorithm, the cipher generation method, the image decomposition encryption scheme, and the security of the proposed algorithm, and conducted simulation experiments. Compared with some similar existing encryption schemes, the proposed algorithm showed comprehensive advantages, including a significant reduction in cipher length, a decreased likelihood of key and cipher-text interception during transmission, and a substantially large key space. In contrast to traditional image compression encryption, this paper proposed a novel image decomposition encryption mode, which opens up new ideas for enhancing image encryption and transmission security. The computational complexity of the algorithm will be discussed further in future work.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant No. 61702153.

Footnotes

Authors’ contributions: LT and XL contributed in conception, design, manuscript drafting, and manuscript review. LH contributed in funding acquisition, software, and visualization. BH contributed in validation and resources.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China, (grant number 61702153).

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  • 1. Zhang Y. Chaotic Digital Image Cryptosystem. Beijing: Tsinghua University Press, 2016.
  • 2. Guo FM, Tu L. Application of Chaos Theory in Cryptography. Beijing: Beijing Institute of Technology Press, 2015.
  • 3. Yang Y. Development and future of information hiding in image transformation domain: a literature review. In: 2022 4th International Conference on Image Processing and Machine Vision, ACM, 2022.
  • 4. Feng L, Du J, Fu C. Digital image encryption algorithm based on double chaotic map and LSTM. Comput Mat & Continua 2023; 77: 1645–1662.
  • 5. Ma X, Wang Z, Wang C. An image encryption algorithm based on Tabu search and hyperchaos. Int J Bifurcation Chaos 2024; 34: 2450170.
  • 6. Lin Y, Yang Y, Li P. Development and future of compression-combined digital image encryption: a literature review. Digit Signal Process 2025; 158: 104908.
  • 7. Feng W, Yang J, Zhao X, et al. A novel multi-channel image encryption algorithm leveraging pixel reorganization and hyperchaotic maps. Mathematics 2024; 12: 3917.
  • 8. Feng W, Zhang J, Chen Y, et al. Exploiting robust quadratic polynomial hyperchaotic map and pixel fusion strategy for efficient image encryption. Expert Syst Appl 2024; 246: 123190.
  • 9. Raghuvanshi KK, Kumar S, Kumar S, et al. Image encryption algorithm based on DNA encoding and CNN. Expert Syst Appl 2024; 252: 124287.
  • 10. Rohhila S, Singh AK. Deep learning-based encryption for secure transmission of digital images: a survey. Comput Electr Eng 2024; 116: 109236.
  • 11. Ye C, Tan S, Wang J, et al. Social image security with encryption and watermarking in hybrid domains. Entropy 2025; 27: 276.
  • 12. Yu F, He S, Yao W, et al. Bursting firings in memristive Hopfield neural network with image encryption and hardware implementation. IEEE Trans Comput Aided Des Integr Circuits Syst 2025.
  • 13. Yu F, Su D, He S, et al. Resonant tunneling diode cellular neural network with memristor coupling and its application in police forensic digital image protection. Chin Phys B 2025; 34: 050502.
  • 14. Yu F, Zhang S, Su D, et al. Dynamic analysis and implementation of FPGA for a new 4D fractional-order memristive Hopfield neural network. Fractal Fractional 2025; 9: 115.
  • 15. Yu F, Tan B, He T, et al. A wide-range adjustable conservative memristive hyperchaotic system with transient quasi-periodic characteristics and encryption application. Mathematics 2025; 13: 726.
  • 16. Preishuber M, Hütter T, Katzenbeisser S. Depreciating motivation and empirical security analysis of chaos-based image and video encryption. IEEE Trans Information Forensics Security 2018; 13: 2137–2150.
  • 17. Wang X, Xu B, Zhang H. A multi-ary number communication system based on hyperchaotic system of 6th-order cellular neural network. Commun Nonlinear Sci Numer Simul 2010; 15: 124–133.
  • 18. Chua LO, Yang L. Cellular neural networks: theory. IEEE Trans Circuits Syst 1988; 35: 1257–1272.
  • 19. Miller JE, Moursund DG, Duris CS. Elementary Theory and Application of Numerical Analysis. New York: Dover Publications, 2011.
  • 20. Zhao B, Chen M, Zou FS, et al. Proficiency in MATLAB: Science Computation and the Application of Data Statistics. Beijing: Posts and Telecommunications Press, 2018.
  • 21. Dong L, Yao G. Method for generating pseudo random numbers based on cellular neural network. Chinese J Commun 2016; 37: 2016252(1–7).
  • 22. Rukhin A, Nechvatal J, Smid M, et al. A statistical test suite for random and pseudorandom number generators for cryptographic applications. Special Publication 800-22 Revision 1a. Maryland: National Institute of Standards and Technology (NIST), 2010.
  • 23. The Caltech 101 dataset. http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html#Download, 2020.
  • 24. Wu Y, Zhou Y, Saverriades G, et al. Local Shannon entropy measure with statistical tests for image randomness. Inf Sci 2013; 222: 323–342.
  • 25. Wang S, Wang C, Xu C. An image encryption algorithm based on a hidden attractor chaos system and the Knuth–Durstenfeld algorithm. Opt Lasers Eng 2019; 128: 105995.
  • 26. Zhou K, Fan J, Fan H, et al. Secure image encryption scheme using double random-phase encoding and compressed sensing. Opt Laser Technol 2020; 121: 105769.
