Biomedical Engineering Letters
. 2018 Jan 31;8(2):239–247. doi: 10.1007/s13534-018-0057-4

Increasing the quality of reconstructed signal in compressive sensing utilizing Kronecker technique

H Zanddizari 1, S Rajan 2, Houman Zarrabi 3
PMCID: PMC6208527  PMID: 30603207

Abstract

The quality of reconstruction of signals sampled using compressive sensing (CS) depends on the compression factor and the length of the measurement vector. A simple method is proposed that pre-processes data before reconstruction of compressively sampled signals using the Kronecker technique and improves the quality of recovery. This technique reduces the mutual coherence between the projection matrix and the sparsifying basis, leading to improved reconstruction of the compressed signal. The pre-processing method changes the dimension of the sensing matrix via the Kronecker product and adapts the sparsifying basis accordingly. A theoretical proof of the decrease in mutual coherence under the proposed technique is also presented. The decrease in mutual coherence has been tested with different projection matrices, and the proposed recovery technique has been tested on an ECG signal from the MIT-BIH Arrhythmia database. Traditional CS recovery algorithms have been applied with and without the proposed technique to the ECG signal to demonstrate the increase in reconstruction quality. To reduce the computational burden on devices with limited capabilities, sensing is carried out with limited samples to obtain a measurement vector. As recovery is generally outsourced, computational limitations do not apply there, and recovery can be done using multiple measurement vectors, thereby increasing the dimension of the projection matrix via the Kronecker product. The proposed technique can be used with any CS recovery algorithm and can be regarded as a simple pre-processing step during reconstruction.

Keywords: Signal recovery, Compressed sensing, Kronecker technique, Compression

Introduction

Compressive sensing (CS), a new method of representing a signal by fewer samples, is used in various applications. CS was introduced for generating samples well below the Nyquist rate, and has been used in signal processing, image processing, data compression and encryption [1–3], and in biomedical engineering applications such as MRI [4] and echocardiographic images [5].

Basically, CS has two main phases: a sensing or compression phase and a reconstruction or recovery phase. In the compression phase, a rectangular matrix multiplies a vector, which is the signal to be compressed. This phase is very simple, fast and energy efficient. Efficient and fast compressors based on CS are already available [6–9]. The effectiveness of different compression algorithms is compared using the compression ratio (CR), defined as follows:

CR(\%) = \left(1 - \frac{\text{samples in compressed signal}}{\text{samples in original signal}}\right) \times 100 \qquad (1)
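As a minimal numerical sketch of Eq. (1) (written here in Python, although the paper's experiments used MATLAB), the CR values used later in the paper follow directly:

```python
# Compression ratio per Eq. (1): n samples compressed into m measurements.
def compression_ratio(m, n):
    """CR in percent."""
    return (1 - m / n) * 100

print(compression_ratio(64, 128))   # 50.0  (half the samples kept)
print(compression_ratio(16, 128))   # 87.5  (the "87%" setting in the experiments)
```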

As the process of compression maps the original signal from a higher-dimensional space into a lower-dimensional space, compressive sensing can also be regarded as an encryption system [10, 11]. For instance, CS has been used for simultaneous compression and encryption of medical images [12, 13]. The secrecy of CS-based cryptosystems is high, because without knowing the key (the projection matrix) reconstruction is impossible [14]. CS shifts the computational burden from the compression phase to the recovery phase, where an underdetermined system of equations must be solved. Solving such equations for the sparsest solution requires computationally intensive approaches. Nevertheless, in many applications of CS, the computational burden of the recovery phase is transferred to a third party, i.e. a cloud environment. In addition, in a remote sensing scenario, remote sensors are assumed to have limited storage and computational capability, while remote centers have large storage and effectively unlimited computational capabilities; hence such centers generally handle the recovery phase.

Compressive sensing

Compressive sensing (CS) is a sampling model for acquiring and reconstructing a sparse signal by solving an underdetermined system of equations [2]. For CS, the signal x ∈ R^N needs to be either sparse or compressible. A signal s is called k-sparse if it has only k nonzero coefficients and (N−k) zero coefficients. In the sparse domain, most practical signals have k significant nonzero coefficients and (N−k) negligible coefficients; such signals are called compressible. An orthonormal basis such as the DCT, Fourier or wavelet basis is used to map x to its sparse domain, i.e. x = ψs. The product of a projection matrix Φ_{M×N}, M ≪ N, and a k-compressible signal x ∈ R^N generates the measurement vector y ∈ R^M, i.e. y = Φx = Φψs, where ψ is a known sparsifying basis; for brevity we set A = Φψ. In the recovery phase, for the noiseless case, if s is sparse (k ≪ N), then it can be reconstructed from M measurements by solving the ℓ1-minimization problem

\min \|s\|_1 \quad \text{subject to} \quad As = y \qquad (2)

and, for the noisy case,

\min_{\hat{s} \in \mathbb{R}^N} \|\hat{s}\|_1 \quad \text{subject to} \quad \|y - A\hat{s}\|_2 < \epsilon \qquad (3)

where \|\cdot\|_2 is the Euclidean norm, which gives the energy of the signal s, \|s\|_2 = \sqrt{s_1^2 + s_2^2 + \cdots + s_N^2}, \|\cdot\|_1 is the ℓ1-norm of a vector, ϵ is the upper bound on the Euclidean norm of the contributing noise, and s and ŝ are the recovered sparse signals in the noiseless and noisy cases, respectively. For stable and exact reconstruction of a sparse signal, the Restricted Isometry Property (RIP) is a sufficient condition,

(1-\delta_k)\|s\|_2^2 \le \|As\|_2^2 \le (1+\delta_k)\|s\|_2^2 \qquad (4)

where δ_k is the isometry constant of the matrix A, whose value lies in the interval from zero to one [15]. Unfortunately, checking the RIP of a matrix or calculating its isometry constant is practically impossible. However, if M ≥ C·k·log(N/k) for large enough M and N and a constant C that depends on M/N, then with overwhelming probability matrices with i.i.d. Gaussian entries (zero mean and variance 1/M) satisfy the RIP [15]. In addition, sub-Gaussian random matrices, e.g. Bernoulli matrices with independent entries taking the values ±1/√M, also satisfy the RIP [16].
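The sensing setup described above can be sketched in a few lines (Python/NumPy rather than the paper's MATLAB; the dimensions are illustrative, not from the paper):

```python
import numpy as np

# Sense a k-sparse signal with an i.i.d. Gaussian projection matrix whose
# entries are N(0, 1/M), as described in the text.
rng = np.random.default_rng(0)
N, M, k = 256, 64, 8

s = np.zeros(N)
s[rng.choice(N, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))  # entries have variance 1/M
y = Phi @ s                                           # M compressed measurements

print(y.shape)               # (64,)
print(np.count_nonzero(s))   # 8
```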

Besides these random projection matrices, there are other ways to choose a projection matrix. For example, a projection matrix can be obtained from an orthonormal basis: Candès et al. [16] proved that if U is an orthonormal matrix, then M of its rows selected at random, with the columns renormalized, can be used as a projection matrix provided

k \le C \cdot \frac{1}{\mu(\Phi,\psi)^2} \cdot \frac{M}{(\log N)^6}, \qquad \mu(\Phi,\psi) = \sqrt{N}\,\max_{i,j}|\langle \Phi_i, \psi_j \rangle|, \quad \Phi_i \in \Phi,\ \psi_j \in \psi \qquad (5)

where k is the order of sparsity and μ(Φ,ψ) is the mutual coherence between Φ and ψ, which measures their similarity. For instance, if we choose the rows of Φ from the Fourier basis and ψ as the identity matrix, then μ = 1. Matrices that yield a smaller μ are desirable for sensing. The maximum coherence of a matrix can also be used to verify the RIP condition. By definition, the maximum coherence of a matrix A_{M×N} is

\mu = \max_{i \ne j} \frac{|\langle A_i, A_j \rangle|}{\|A_i\|_2 \|A_j\|_2}, \qquad 1 \le i, j \le N \qquad (6)

where A_i and A_j denote columns of A. The Welch bound [17] may be used to determine the smallest achievable μ: for any matrix A_{M×N}, μ ≥ √((N−M)/(M(N−1))). Unfortunately, a general method to generate matrices that meet this bound has not yet been found. Given a matrix A with coherence μ, it satisfies the RIP of order 2k with δ_{2k} < μ(2k−1) provided 2k < 1/μ + 1 [18]. Once appropriate matrices are designed or chosen, sensing can be performed. After the sensing phase, in the recovery phase, we need an algorithm to recover the initial signal. Numerous such algorithms exist, including interior-point algorithms [19, 20], gradient projection [21], greedy approaches such as orthogonal matching pursuit (OMP) [22, 23], and the smoothed ℓ0 norm [24].
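Equation (6) and the Welch bound can be checked numerically; the sketch below (Python/NumPy, with an arbitrary random matrix as the example) confirms that a random Gaussian matrix sits above the bound:

```python
import numpy as np

# Maximum coherence of a matrix A (Eq. 6) and the Welch lower bound.
def max_coherence(A):
    G = A / np.linalg.norm(A, axis=0)   # normalize columns to unit norm
    C = np.abs(G.T @ G)                 # |<Ai, Aj>| / (||Ai|| ||Aj||) for all pairs
    np.fill_diagonal(C, 0.0)            # exclude i == j
    return C.max()

def welch_bound(M, N):
    return np.sqrt((N - M) / (M * (N - 1)))

rng = np.random.default_rng(1)
M, N = 16, 64
A = rng.standard_normal((M, N))
mu = max_coherence(A)
print(mu >= welch_bound(M, N))   # True: no M x N matrix can go below the bound
```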

Length of sparse signal in CS

The length of the sparse signal plays a significant role in compression. It directly affects the size of the projection matrix, the compression ratio, the computational complexity, and the order of sparsity. Given y_{m×1} = Φ_{m×n} x_{n×1}, the length of x equals the number of columns of Φ; as the former increases, so does the latter. A larger Φ has more elements and requires more storage space. In addition, the product of Φ and x requires m×n multiplications and m×(n−1) additions. For instance, given CR = 50% (m = n/2), sensing 256 samples with n = 256 requires 32768 multiplications, whereas dividing the signal into two separate segments of n = 128 samples requires 8192 × 2 = 16384 multiplications. It is therefore more efficient to sense signals in smaller lengths. This observation is particularly relevant when CS is used in devices such as wearable ECG recorders that have a limited power supply, storage and computational capability. Wearable ECG recorders collect ECG signals for several days and nights and transfer them to a remote medical centre for further processing. In such resource-limited devices, CS should be applied in a fast and energy-efficient manner. When sensors acquire the ECG signal in smaller lengths, fewer multiplication and addition operations are required during the compression phase to obtain the measurement vector. As the order of sparsity and the number of measurements are related to the length of the signal, the choice of signal length has implications during the recovery phase. In this paper, we propose a Kronecker-based recovery of compressively sampled signals: during the sensing phase, signals are sensed with a limited number of samples, while during the recovery phase we concatenate measurement vectors to perform recovery at a larger size, utilizing the Kronecker technique to increase recovery quality.
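The operation-count argument above can be reproduced with a small helper (a sketch; `mults` is a hypothetical name, not from the paper), assuming CR = 50% and that the segment length divides the total:

```python
# Multiplications needed to compressively sense `total` samples in blocks of
# `seg_len` samples at compression ratio `cr`: each block costs m * seg_len
# multiplications, with m = seg_len * (1 - cr).
def mults(total, seg_len, cr=0.5):
    m = int(seg_len * (1 - cr))
    blocks = total // seg_len
    return blocks * m * seg_len

print(mults(256, 256))   # 32768: one 256-sample block, as in the text
print(mults(256, 128))   # 16384: two 128-sample blocks, half the cost
```

Halving the segment length halves the multiplication count, which is why piecewise sensing suits resource-limited wearables.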

Kronecker product

If B is a p×q matrix and A is an m×n matrix, then the Kronecker product of B and A is the pm×qn matrix

B_{p \times q} \otimes A_{m \times n} = \begin{bmatrix} b_{11}A & b_{12}A & \cdots & b_{1q}A \\ b_{21}A & b_{22}A & \cdots & b_{2q}A \\ \vdots & \vdots & \ddots & \vdots \\ b_{p1}A & b_{p2}A & \cdots & b_{pq}A \end{bmatrix} \qquad (7)

For a special case, when B is an identity matrix Ip×p—a square matrix with ones on the main diagonal and zeros elsewhere—the Kronecker product is

I_{p \times p} \otimes A_{m \times n} = \begin{bmatrix} A & 0 & \cdots & 0 \\ 0 & A & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A \end{bmatrix} \qquad (8)
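The block-diagonal structure of Eq. (8) is easy to verify with NumPy's built-in Kronecker product (a sketch with an arbitrary small A):

```python
import numpy as np

# Eq. (8): I ⊗ A is block diagonal with copies of A on the diagonal.
A = np.arange(6.0).reshape(2, 3)    # an arbitrary 2x3 matrix
K = np.kron(np.eye(3), A)           # 6x9 block-diagonal result

print(K.shape)                       # (6, 9)
print(np.allclose(K[2:4, 3:6], A))   # True: the middle diagonal block is A
print(np.allclose(K[0:2, 3:6], 0))   # True: off-diagonal blocks are zero
```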

Orthonormal matrices such as the identity matrix have RIP constant δ_K = 0. Duarte et al. proved that the RIP constant of a measurement matrix formed as a Kronecker product obeys the following inequality

\delta_K(A_1 \otimes A_2 \otimes \cdots \otimes A_I) \le \prod_{i=1}^{I}\bigl(1+\delta_K(A_i)\bigr) - 1 \qquad (9)

where δ_K(A_i) is the RIP constant of matrix A_i [25, 26]. A smaller RIP constant makes a matrix more efficient for sensing. In (10), the resulting RIP constant is not larger than that of the original matrix: although we cannot say with certainty that the Kronecker product will improve the RIP constant, we can assert that it will not degrade it.

\delta_K(I \otimes A) \le \bigl(1+\delta_K(I)\bigr)\bigl(1+\delta_K(A)\bigr) - 1 \ \Rightarrow\ \delta_K(I \otimes A) \le 1 + \delta_K(A) - 1 \ \Rightarrow\ \delta_K(I \otimes A) \le \delta_K(A) \qquad (10)

Proposed Kronecker technique

In the recovery phase, the measurement vector and the product of the projection matrix and the sparsifying basis are assumed to be available. Given this pair (y_{m×1}, Φ_{m×n}ψ_{n×n}), the initial sparse signal s_{n×1} can be reconstructed. For simplicity, we denote the output of a recovery algorithm by F:

s_{n \times 1} = F(y_{m \times 1}, \Phi_{m \times n}\psi_{n \times n}) \qquad (11)

In many CS applications, due to the limited capability of the device or the need for fast compression, the sensing phase is done in a piecewise fashion: instead of being sensed as a single vector of length N, the signal is sensed as shorter signals of length less than N, leading to measurement vectors for a number of consecutive segments. Let Y = {y^1_{m×1}, y^2_{m×1}, …, y^l_{m×1}} be a set of consecutive measurement vectors, where y^i_{m×1} denotes the ith measurement vector, and let S = {s^1_{n×1}, s^2_{n×1}, …, s^l_{n×1}} be the set of corresponding sparse vectors.

Let us consider an orthonormal dictionary as the sparsifying dictionary; without loss of generality, we take the orthonormal DCT dictionary, as it is the one most commonly used for compression in CS. Assume that we have sensed a sparse signal of length 2n, which may be viewed either as one long sensed signal or as two consecutively sensed sparse signals, each of length n, leading to measurement vectors of length 2m and m, respectively. Recovery may be attempted by one of the three methods given below, of which the third (Method C) is the proposed method.

Method A

This method refers to the common CS recovery methods available currently in the literature in which each measurement vector is considered separately for reconstruction.

Method B

In Method B, instead of using just one long measurement vector, the measurement vector may be segmented into several consecutive measurements of equal length. Alternatively, we may view this as a sensor producing several smaller, equal-length compressively sensed measurements. Without loss of generality, assume the single measurement vector has been divided into two equal-length measurement vectors. If we concatenate two consecutive measurement vectors, y_{2m×1} = [y^1_{m×1}; y^2_{m×1}], then via Â_{2m×2n} = I_{2×2} ⊗ (Φ_{m×n}ψ_{n×n}) the two consecutive sparse vectors s_{2n×1} = [s^1_{n×1}; s^2_{n×1}] = F(y_{2m×1}, Â_{2m×2n}) can be reconstructed. Instead of applying the recovery algorithm twice, once per measurement vector, we concatenate the measurement vectors and apply it once. In this case, the new matrix is

\hat{A}_{2m \times 2n} = I_{2 \times 2} \otimes (\Phi_{m \times n}\psi_{n \times n}) = (I_{2 \times 2} \otimes \Phi_{m \times n})(I_{2 \times 2} \otimes \psi_{n \times n}) = \begin{bmatrix} \Phi\psi & 0 \\ 0 & \Phi\psi \end{bmatrix} = \begin{bmatrix} \Phi & 0 \\ 0 & \Phi \end{bmatrix} \begin{bmatrix} \psi & 0 \\ 0 & \psi \end{bmatrix} \qquad (12)
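The factorization in Eq. (12) is the mixed-product property of the Kronecker product, I ⊗ (Φψ) = (I ⊗ Φ)(I ⊗ ψ); a quick numerical check (with a random Φ and a QR-generated orthonormal basis standing in for the DCT dictionary):

```python
import numpy as np

# Verify Eq. (12): I ⊗ (Φψ) = (I ⊗ Φ)(I ⊗ ψ).
rng = np.random.default_rng(2)
m, n, t = 4, 8, 2

Phi = rng.standard_normal((m, n))
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # some orthonormal basis

lhs = np.kron(np.eye(t), Phi @ Psi)
rhs = np.kron(np.eye(t), Phi) @ np.kron(np.eye(t), Psi)
print(np.allclose(lhs, rhs))   # True
```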

For simplicity we illustrated the concatenation of two measurement vectors, but the procedure extends to a larger number of them. For concatenating t consecutive measurement vectors, we have y^i_{tm×1} = [y^i_{m×1}; y^{i+1}_{m×1}; …; y^{i+t−1}_{m×1}], where i indexes the first segment of each group of t, and l, the total number of segments, is a multiple of t; via Â_{tm×tn} = I_{t×t} ⊗ (Φ_{m×n}ψ_{n×n}) one can reconstruct the t corresponding sparse vectors

s^i_{tn \times 1} = [s^i_{n \times 1}; s^{i+1}_{n \times 1}; \ldots; s^{i+t-1}_{n \times 1}] = F(y^i_{tm \times 1}, \hat{A}_{tm \times tn}) \qquad (13)

Since the factor t is used repeatedly in this paper, we call it the Kronecker factor.

Method C

In (12), the projection matrix Φ is fixed and its elements cannot be changed, but ψ can be; hence we can choose the best one. Instead of the matrix in (12), we propose (14), and we will show that this matrix improves the quality of reconstruction.

\begin{bmatrix} \Phi_{m \times n} & 0 \\ 0 & \Phi_{m \times n} \end{bmatrix} \cdot \psi_{2n \times 2n} \qquad (14)

In (14) we considered two consecutive measurement vectors; however, the number of consecutive measurement vectors can be increased. For t > 2 the sparsifying basis becomes ψ_{tn×tn}.

We assert that our proposed technique decreases the mutual coherence, thereby improving the reconstructed signal quality. As noted above, lower mutual coherence in the recovery phase leads to better reconstruction. For a given measurement vector and projection matrix, we try to apply the sparsifying basis in an efficient manner. In CS applications one tries to find a projection matrix that is sufficiently incoherent with the sparsifying basis; here we assert that, for a chosen projection matrix and sparsifying basis, applying our Kronecker technique decreases the mutual coherence. Hence, we do not change the nature of the sparsifying basis or projection matrix. For simplicity, we use Φ′ = I_{t×t} ⊗ Φ_{m×n}, ψ_s = I_{t×t} ⊗ ψ_{n×n}, and ψ_l = ψ_{tn×tn} to denote the projection matrix, the small sparsifying basis, and the large sparsifying basis, respectively. The coefficients of the sparsifying dictionary, which in this paper is the orthonormal DCT ψ_{n×n}, are given in (15):

\psi_{ij} = w_j \cos\!\left(\frac{\pi(2i-1)(j-1)}{2n}\right), \quad i = 1, 2, \ldots, n
w_j = \begin{cases} \dfrac{1}{\sqrt{n}} & j = 1 \\[4pt] \sqrt{\dfrac{2}{n}} & 2 \le j \le n \end{cases} \qquad (15)

where ψ_ij is the entry in the ith row and jth column of the dictionary. Equation (15) shows that the largest element of an n×n orthonormal DCT dictionary is √(2/n); therefore, the largest element of ψ_s (built from ψ_{n×n}) is √t times greater than that of ψ_l (ψ_{tn×tn}). In other words, increasing the size of a normalized DCT dictionary decreases its largest element, giving smaller coefficients throughout. Because of the zero matrices in the projection matrix in (14), only 50% of the DCT coefficients are used in each multiplication and the others do not contribute. Since each column of ψ_l is normalized, if only 50% of its coefficients are used, the Euclidean norm (energy) of the contributing part is smaller. By contrast, in Method B all coefficients of ψ_s are used in the multiplication and their energy equals one. For instance, consider two DCT dictionaries of different sizes, one with 64 columns and the other with 512. Since both are orthonormal, the unit energy of each column of the 64-column dictionary is shared among 64 elements, whereas for the 512-column dictionary it is shared among 512 elements. The larger DCT matrix thus has smaller elements than the smaller one, and hence lower coherence against the projection matrix. Projection matrices such as random matrices are, theoretically, assumed to be incoherent with any sparsifying dictionary. By choosing the sparsifying matrix according to our proposed Method C, a reduced mutual coherence between the projection matrix and the sparsifying matrix is obtained, as expressed by

\mu(\Phi', \psi_l) < \mu(\Phi', \psi_s) \qquad (16)
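The element-size argument can be checked by constructing the dictionary of Eq. (15) directly (a NumPy sketch; note the largest entry is bounded by √(2/n), up to the cosine factor):

```python
import numpy as np

# Orthonormal DCT dictionary per Eq. (15); its entries are bounded by sqrt(2/n),
# so the tn x tn dictionary has uniformly smaller entries than the n x n one.
def dct_dictionary(n):
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    w = np.where(j == 1, 1.0 / np.sqrt(n), np.sqrt(2.0 / n))
    return w * np.cos(np.pi * (2 * i - 1) * (j - 1) / (2 * n))

n, t = 64, 8
small, large = dct_dictionary(n), dct_dictionary(t * n)

print(np.allclose(small.T @ small, np.eye(n)))        # True: orthonormal columns
print(abs(small).max() <= np.sqrt(2 / n) + 1e-12)     # True: bounded by sqrt(2/n)
print(abs(small).max() > abs(large).max())            # True: larger dict, smaller entries
```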

Figure 1 shows the mutual coherence of different projection matrices, namely, random Gaussian N(0, 1/m), symmetric Bernoulli (P(Φ_ij = ±1/√m) = 0.5), general orthogonal measurement ensembles, introduced in [16] as a reliable way to generate projection matrices, and m rows of the Fourier basis selected at random. The mutual coherence of these projection matrices was checked by simulation using Method B and our proposed approach. In this simulation, n = 32, m = 16, t = 4; therefore ψ_s = I_{4×4} ⊗ ψ_{32×32} and ψ_l = ψ_{128×128}. The simulation was run 1000 times. Simulations for other values of (n, m, t), omitted for lack of space, substantiated the same reduction in mutual coherence. In Fig. 1, the horizontal axis shows the iteration number: for instance, a value of 100 on the x axis means that for the 100th time random Gaussian N(0, 1/16) and symmetric Bernoulli (P(Φ_ij = ±1/√16) = 0.5) matrices were generated to check their mutual coherence with the DCT bases under both Methods B and C. In addition, projection matrices based on orthogonal ensembles and the Fourier basis were generated, i.e. by selecting m rows of these matrices at random. We do not show ensemble-averaged results, to emphasize the fact that in all 1000 runs the proposed Method C gave lower mutual coherence than Method B. Figure 1 shows that, for DCT bases, using the proposed Kronecker technique decreases the mutual coherence.
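A compact version of the Fig. 1 experiment for the Gaussian case can be sketched as follows (Python/NumPy rather than the paper's MATLAB; 50 trials instead of 1000, and the normalized inner-product form of μ, so values are illustrative):

```python
import numpy as np

# Compare the coherence of Φ' = I ⊗ Φ with ψ_s = I ⊗ ψ_n (Method B)
# versus ψ_l = ψ_tn (Method C), for n = 32, m = 16, t = 4 as in the text.
def dct_dictionary(n):
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    w = np.where(j == 1, 1.0 / np.sqrt(n), np.sqrt(2.0 / n))
    return w * np.cos(np.pi * (2 * i - 1) * (j - 1) / (2 * n))

def mutual_coherence(Phi, Psi):
    R = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)  # unit-norm rows
    return np.abs(R @ Psi).max()                          # ψ columns already unit norm

rng = np.random.default_rng(3)
n, m, t = 32, 16, 4
psi_s = np.kron(np.eye(t), dct_dictionary(n))   # Method B basis
psi_l = dct_dictionary(t * n)                   # Method C basis

mu_s, mu_l = [], []
for _ in range(50):
    Phi = np.kron(np.eye(t), rng.normal(0, 1 / np.sqrt(m), (m, n)))
    mu_s.append(mutual_coherence(Phi, psi_s))
    mu_l.append(mutual_coherence(Phi, psi_l))

print(np.mean(mu_l) < np.mean(mu_s))   # True: Method C lowers the coherence
```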

Fig. 1. Comparing the mutual coherence of different projection matrices with two sparsifying bases, according to Method B and Method C

Experimental results and discussion

To demonstrate the applicability of our proposed method for improving the quality of the reconstructed signal, we carried out reconstruction of compressively sampled ECG signals. We chose ECG signals as they are highly compressible, using record no. 100 from the standard MIT-BIH Arrhythmia Database [27]. Simulations were conducted in MATLAB on an Intel Core 2 Duo CPU with a 2.00-GHz clock frequency and 3.00 GB of RAM. We used the SNR to measure the quality of the reconstructed signal, as follows:

\mathrm{SNR} = 20 \log_{10} \frac{\|x\|_2}{\|x - \hat{x}\|_2} \qquad (17)

where x is the original ECG signal and x̂ is the recovered ECG signal. Although other researchers have used the percentage root-mean-square difference (PRD) [28] as a quality measure for compression, inferences drawn using the SNR are the same as with the PRD. In our experiment, 1024 samples of an ECG signal were chosen, and for different CRs, namely 50, 75 and 87%, we checked the effectiveness of our technique, Method C.
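Eq. (17) translates directly into code (a sketch with a synthetic signal standing in for the ECG record):

```python
import numpy as np

# Reconstruction SNR per Eq. (17).
def snr_db(x, x_hat):
    return 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 8 * np.pi, 1024))          # toy stand-in for the ECG
x_hat = x + 0.001 * rng.standard_normal(1024)        # a near-perfect "recovery"
print(snr_db(x, x_hat) > 40)   # True: small residual error gives a high SNR
```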

In Fig. 2, “Ordinary recovery” refers to Method B and “Proposed recovery” refers to Method C; the figure shows the results of a comprehensive experiment on 1024 samples of the ECG signal. Theoretically, it is possible to sense all 1024 samples at once; in practice, however, piecewise sensing is more efficient, as discussed above. For piecewise sensing, we divide the 1024 ECG samples into segments of 16, 32, 64, 128, 256, or 512 samples. For example, since 1024 = 8 × 128, we can sense the whole signal in eight 128-sample blocks; similarly, since 1024 = 32 × 32, we can use thirty-two 32-sample blocks, and so on. For CR = 50%, 75%, and 87%, each 128-sample block is mapped into 64, 32, and 16 measurements, respectively. We used three types of projection matrices commonly used in CS: Gaussian N(0, 1/m), symmetric Bernoulli (P(Φ_ij = ±1/√m) = 0.5), and m rows of an orthonormal basis selected at random. We used the smoothed L0 norm (SL0) algorithm [24] to reconstruct the signal.

Fig. 2. Comparing the quality of reconstruction for the ECG signal. Projection matrices: a Gaussian distribution, b Bernoulli distribution, c general ensemble according to [16]. The results are on the SNR scale; each is the mean of 500 tests for CR = [50%, 75%, 87%] and n = 16, 32, 64, 128, 256, 512. The ordinary method is recovery based on Method B; the proposed method is based on Method C

For each (n, CR) combination, we ran the recovery 500 times via Methods B and C and present the averaged result. In Fig. 2, (a) shows the result for the Gaussian distribution, (b) for the Bernoulli distribution, and (c) for a measurement matrix whose rows were selected at random from an orthonormal basis. In Method C, 512 samples were taken as the reference for the Kronecker factor, t_n = 512/n. For instance, given n = 16, t_16 = 512/16 = 32, ψ_s = I_{32×32} ⊗ ψ_{16×16}, and ψ_l = ψ_{512×512}. With 32 segments concatenated, we need to run the recovery twice, as we have 64 segments of 16 samples each. When n = 512, t_512 = 512/512 = 1, ψ_s = I_{1×1} ⊗ ψ_{512×512}, and ψ_l = ψ_{512×512}; since ψ_s = ψ_l, both methods give the same SNR. After choosing the CR and projection matrices, we ran the recovery of the ECG signal using the SL0 algorithm. From Fig. 2, it can be seen that the proposed technique increases the quality of the reconstructed signal, and that as the CR increases, the quality of reconstruction decreases. Once sensing is done, we cannot change the projection matrix or increase the number of measurements; however, for given measurements and projection matrix, the Kronecker technique lets us increase the quality of the reconstructed signal.

To show the effect of the proposed method on the quality of the reconstructed ECG signal, 1024 samples of the ECG signal were chosen and compressed in a piecewise manner with n = 64, CR = 50%, a Kronecker factor of 8, a Gaussian projection matrix N(0, 1/32), and accordingly ψ_s = I_{8×8} ⊗ ψ_{64×64} and ψ_l = ψ_{512×512}. Figure 3 shows the difference in reconstruction quality between our proposed Kronecker technique (43.5 dB) and Method B (37.1 dB): an improvement of about 6 dB and thus a better-quality reconstructed signal.

Fig. 3. CS recovery of the ECG signal with Method C (upper graph) and Method B (lower graph). The SNR of Method C, the proposed method, is 43.5 dB; the SNR of ordinary CS recovery (Method B) is 37.1 dB

Comparison of Methods A and B

The quality of the reconstructed signal in Method A is equal to that of Method B: the Kronecker product increases the dimension, but the content of the reconstructed signal does not change. This claim is proved in the Appendix. In the following paragraph, we demonstrate that the quality of reconstruction is the same for both methods, using OMP as the reconstruction algorithm for the sake of discussion.

When we have two measurement vectors (y^1, y^2) and reconstruct using Method A, the OMP algorithm must be run twice. Suppose that after the first run we obtain the coefficients of the first sparse vector [s^1_1, s^1_2, …, s^1_n], and after the second run the coefficients of the second sparse vector [s^2_1, s^2_2, …, s^2_n]. In its first iteration, OMP finds the column most correlated with the measurement vector, projects the measurement vector onto that column, and then searches for the column most correlated with the resulting residual. In each iteration, OMP projects the residual onto the span of the columns selected so far, which prevents choosing the same column repeatedly and leads to faster convergence. If we run Method B, the coefficients will be exactly the concatenation [s^1_1, s^1_2, …, s^1_n, s^2_1, s^2_2, …, s^2_n]. In Method B, the positions and values of the coefficients are fixed; only the order in which they are found differs. If the vectors are strictly sparse, with exactly k nonzero entries, the results of Methods A and B are identical. When dealing with noisy, compressible signals, the results of A and B are approximately equal; the difference is less than the contributed noise energy and is generally negligible.

Conclusion

We have presented a novel way of improving the quality of reconstruction of compressively sampled signals through the Kronecker technique: a simple pre-processing step in the recovery phase, applied before the recovery algorithm. Although we increase the dimension of the signal under recovery, we perform a single “one-shot” recovery instead of multiple recoveries. We believe the proposed method places no restriction on the choice of sparsifying dictionary and will perform equally well with both fixed and adaptive dictionaries; it also works for almost all well-known projection matrices, whether random or deterministic. We have demonstrated a practical application of the technique through CS-based compression of an ECG signal. The method is viable for wearable devices with limited storage and processing capabilities: wearable ECG recorders, for example, may compress data in an energy-efficient manner and send it to a remote processing centre where it can be recovered on demand. Using the proposed method, we were able to decrease the length of the input signal in the compression phase, thereby decreasing the computation and processing time. Decreasing the length of the input vector does affect the order of sparsity and the signal quality; that quality is then restored through the Kronecker technique during the recovery process.

Appendix

To show that Methods A and B yield equal quality, we start from the definition of CS recovery in (18):

\min_{s \in \mathbb{R}^n} \|s\|_1 \quad \text{s.t.} \quad y = (\Phi\psi)s
A = \Phi\psi: \quad \min_{s \in \mathbb{R}^n} \|s\|_1 \quad \text{s.t.} \quad y = As
\min_{s \in \mathbb{R}^n} \sum_{i=1}^{n} |s_i| \quad \text{s.t.} \quad y_i = \sum_{j=1}^{n} a_{ij}s_j, \quad i = 1, 2, \ldots, m \qquad (18)

For two measurement vectors {y^1, y^2} ⊂ R^m sensed by the same projection matrix Φ_{m×n}, Method A requires solving (18) twice:

\min_{s^1 \in \mathbb{R}^n} \sum_{i=1}^{n} |s^1_i| \ \text{s.t.}\ y^1_i = \sum_{j=1}^{n} a_{ij}s^1_j,\ i = 1, \ldots, m; \qquad \min_{s^2 \in \mathbb{R}^n} \sum_{i=1}^{n} |s^2_i| \ \text{s.t.}\ y^2_i = \sum_{j=1}^{n} a_{ij}s^2_j,\ i = 1, \ldots, m \qquad (19)

In Method B, we concatenate the two measurement vectors into one, ỹ = [y^1; y^2] ∈ R^{2m}, and use Ã_{2m×2n} = I_{2×2} ⊗ A_{m×n} = I_{2×2} ⊗ (Φ_{m×n}ψ_{n×n}) for the recovery:

\min_{\tilde{s} \in \mathbb{R}^{2n}} \sum_{i=1}^{2n} |\tilde{s}_i| \quad \text{s.t.} \quad \tilde{y}_i = \sum_{j=1}^{2n} \tilde{a}_{ij}\tilde{s}_j, \quad i = 1, \ldots, 2m
\min_{\tilde{s} \in \mathbb{R}^{2n}} \left( \sum_{i=1}^{n} |\tilde{s}_i| + \sum_{i=n+1}^{2n} |\tilde{s}_i| \right) \quad \text{s.t.} \quad \tilde{y}_i = \sum_{j=1}^{2n} \tilde{a}_{ij}\tilde{s}_j, \quad i = 1, \ldots, 2m

Because of zero matrices in A~,

\min_{\tilde{s} \in \mathbb{R}^{2n}} \left( \sum_{i=1}^{n} |\tilde{s}_i| + \sum_{i=n+1}^{2n} |\tilde{s}_i| \right) \quad \text{s.t.} \quad \tilde{y}_i = \sum_{j=1}^{n} \tilde{a}_{ij}\tilde{s}_j,\ i = 1, \ldots, m; \quad \tilde{y}_i = \sum_{j=n+1}^{2n} \tilde{a}_{ij}\tilde{s}_j,\ i = m+1, \ldots, 2m

The entries ỹ_i, i = 1, …, m, are exactly the elements of y^1, and ỹ_i, i = m+1, …, 2m, are the elements of y^2. Likewise, ã_ij for i = 1, …, m, j = 1, …, n and ã_ij for i = m+1, …, 2m, j = n+1, …, 2n contain the elements of the matrix A. Hence,

\min_{\tilde{s} \in \mathbb{R}^{2n}} \left( \sum_{i=1}^{n} |\tilde{s}_i| + \sum_{i=n+1}^{2n} |\tilde{s}_i| \right) \quad \text{s.t.} \quad y^1_i = \sum_{j=1}^{n} a_{ij}\tilde{s}_j,\ i = 1, \ldots, m; \quad y^2_i = \sum_{j=n+1}^{2n} a_{i(j-n)}\tilde{s}_j,\ i = 1, \ldots, m \qquad (20)

In (20), two independent blocks of variables are minimized under independent constraints. Let s̃^1 ∈ R^n and s̃^2 ∈ R^n comprise the first and second n components of s̃ ∈ R^{2n}. We can therefore rewrite (20) as two independent minimization problems,

\min_{\tilde{s}^1 \in \mathbb{R}^n} \sum_{i=1}^{n} |\tilde{s}^1_i| \ \text{s.t.}\ y^1_i = \sum_{j=1}^{n} a_{ij}\tilde{s}^1_j,\ i = 1, \ldots, m; \qquad \min_{\tilde{s}^2 \in \mathbb{R}^n} \sum_{i=1}^{n} |\tilde{s}^2_i| \ \text{s.t.}\ y^2_i = \sum_{j=1}^{n} a_{ij}\tilde{s}^2_j,\ i = 1, \ldots, m \qquad (21)

Equation (21) is identical to (19); therefore, Methods A and B give the same results. This procedure can be extended to more than two measurement vectors.

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

The authors used the data available in [27] for this study and did not collect data from any human participant or animal.

References

  • 1.Baraniuk RG. Compressive sensing. IEEE Signal Process Mag. 2007;24(4):118–121. doi: 10.1109/MSP.2007.4286571. [DOI] [Google Scholar]
  • 2.Donoho DL. Compressed sensing. IEEE Trans Inf Theory. 2006;52(4):1289–1306. doi: 10.1109/TIT.2006.871582. [DOI] [Google Scholar]
  • 3.Tropp JA, Laska JN, Duarte MF, Romberg JK, Baraniuk RG. Beyond Nyquist: efficient sampling of sparse bandlimited signals. IEEE Trans Inf Theory. 2012;56(1):520–544. doi: 10.1109/TIT.2009.2034811. [DOI] [Google Scholar]
  • 4.Lustig M, Donoho DL, Santos JM, Pauly JM. Compressed sensing MRI. IEEE Signal Process Mag. 2008;25(2):72–82. doi: 10.1109/MSP.2007.914728. [DOI] [Google Scholar]
  • 5.Gifani P, Behnam H, Haddadi F, Sani ZA, Shojaeifard M. Temporal super resolution enhancement of echocardiographic images based on sparse representation. IEEE Trans Ultrason Ferroelectr Freq Control. 2016;63(1):6–19. doi: 10.1109/TUFFC.2015.2493881. [DOI] [PubMed] [Google Scholar]
  • 6.Abo-Zahhad MM, Hussein AI, Mohamed AM. Compression of ECG signal based on compressive sensing and the extraction of significant features. Int J Commun Netw System Sci. 2015;8:97–117. [Google Scholar]
  • 7.Mamaghanian H, Khaled N, Atienza D, Vandergheynst P. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes. IEEE Trans Biomed Eng. 2011;58(9):2456–2466. doi: 10.1109/TBME.2011.2156795. [DOI] [PubMed] [Google Scholar]
  • 8.Singh A, Dandapat S. Weighted mixed-norm minimization based joint compressed sensing recovery of multi-channel electrocardiogram signals. Comput Electr Eng. 2016;53:203–218. doi: 10.1016/j.compeleceng.2016.01.027. [DOI] [Google Scholar]
  • 9.Singh A, Dandapat S. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals. Healthc Technol Lett. 2017;4(2):50–56. doi: 10.1049/htl.2016.0049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Rachlin Y, Baron D. The secrecy of compressed sensing measurements. In: 46th Annual Allerton conference on communication, 2008.
  • 11.Wanli X, Chengwen L, Guohao L, Rajib R, Wen H, Aruna S. Kryptein: a compressive-sensing-based encryption scheme for the internet of things. In: 16th ACM/IEEE International conference on information processing in sensor networks, April 2017.
  • 12.Wang C, Zhang B, Ren K, Roveda JM. Privacy assured outsourcing of image reconstruction service in cloud. IEEE Trans Emerg Topics Comput. 2013;1(1):166–177. doi: 10.1109/TETC.2013.2273797. [DOI] [Google Scholar]
  • 13.Zand H, Falahati A, Shahhoseini H. Secure reconstruction of image from compressive sensing in cloud. In: 2nd Annual conference of computer and IT at Tehran University, 2015.
  • 14.Orsdemir A, Altun HO, Sharma G, Bocko MF. On the security and robustness of encryption via compressed sensing. In: MILCOM 2008–2008 IEEE Military communications conference, (p. 1–7), Nov 2008.
  • 15.Candès E. The restricted isometry property and its implications for compressed sensing. Compte Rendus de l’Academie des Sci. 2008;346(1):589–592. [Google Scholar]
  • 16.Candès E, Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207–1223. doi: 10.1002/cpa.20124. [DOI] [Google Scholar]
  • 17.Welch L. Lower bounds on the maximum cross correlation of signals. IEEE Trans Inf Theory. 1974;20(3):397–399. doi: 10.1109/TIT.1974.1055219. [DOI] [Google Scholar]
  • 18.Bourgain J, Dilworth SJ, Ford K, Konyagin S, Kutzarova D. Explicit constructions of RIP matrices and related problems. Duke Math J. 2011;159(1):145–185. doi: 10.1215/00127094-1384809. [DOI] [Google Scholar]
  • 19.Chen SS, Donoho DL, Saunders MA. Atomic decomposition by basis pursuit. SIAM J Sci Comput. 1998;20(1):33–61. doi: 10.1137/S1064827596304010. [DOI] [Google Scholar]
  • 20.Berg EVD, Friedlander MP. Probing the pareto frontier for basis pursuit solutions. SIAM J Sci Comput. 2008;31(2):890–912. doi: 10.1137/080714488. [DOI] [Google Scholar]
  • 21.Figueiredo MAT, Nowak RD, Wright SJ. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problem. IEEE J Selec Topics Signal Proc. 2007;1(4):586–597. doi: 10.1109/JSTSP.2007.910281. [DOI] [Google Scholar]
  • 22.Pati YC, Rezaiifar R, Krishnaprasad PS. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In: 27th Asilomar Conference on signals, systems and computers, vol. 1, p. 40–44, 1993.
  • 23.Tropp JA, Gilbert AC. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory. 2007;53(12):4655–4666. doi: 10.1109/TIT.2007.909108. [DOI] [Google Scholar]
  • 24.Mohimani GH, Babaie-Zadeh M, Jutten C. A fast approach for over-complete sparse decomposition based on smoothed ℓ0 norm. IEEE Trans Signal Process. 2009;57(1):289–301. doi: 10.1109/TSP.2008.2007606. [DOI] [Google Scholar]
  • 25.Duarte MF, Baraniuk RG. Kronecker compressive sensing. IEEE Trans Image Process. 2012;21(2):494–504. doi: 10.1109/TIP.2011.2165289. [DOI] [PubMed] [Google Scholar]
  • 26.Duarte MF, Baraniuk RG. Kronecker product matrices for compressive sensing. In: 2010 IEEE international conference on acoustics speech and signal processing (ICASSP).
  • 27.“MIT-BIH Arrhythmia Database,” [Online]. Available: http://www.physionet.org/physiobank/database/mitdb/.
  • 28.Zigel Y, Cohen A, Katz A. The weighted diagnostic distortion (WDD) measure for ECG signal compression. IEEE Trans Biomed Eng. 2000;47(11):1422–1430. doi: 10.1109/TBME.2000.880093. [DOI] [PubMed] [Google Scholar]
