Heliyon. 2022 Jun 1;8(6):e09621. doi: 10.1016/j.heliyon.2022.e09621

Compressive spectrum sensing for 5G cognitive radio networks – LASSO approach

RS Koteeshwari, B Malarkodi
PMCID: PMC9168607  PMID: 35677410

Abstract

In recent years, Artificial Intelligence has become indispensable for effective performance in the communications area, and evolving standards for 5G and beyond demand devices capable of advanced wireless communication. Cognitive radio (CR) is an intelligent communication technology that can manage radio spectrum usage effectively. Spectrum sensing (SS) is the primary task in CR. This paper investigates various wideband spectrum sensing techniques suited for 5G. The Least Absolute Shrinkage and Selection Operator (LASSO) is a suitable choice for compressive sensing and recovery in wideband 5G networks. The obtained results are compared with recent reports, and the relative merits and demerits are discussed.

Keywords: 5G networks, Compressed sensing, Recovery algorithm, LASSO



1. Introduction

Several international bodies are developing next-generation 5G mobile network standards. Consumer demand keeps increasing, mainly in broadband communications; in the current decade, technological developments have raised traffic rates by a factor of 100 or more. Population growth and the affordability of devices have driven up the number of mobile devices in use. It is predicted that by 2025 the number of devices linked to the Internet for communication will surpass 50 billion. A smart communication network can send large amounts of data considerably faster and efficiently connect a large number of devices [1, 2].

Compressed sensing (CS) has revolutionised signal processing, machine learning, and statistics, radically altering our understanding of sensing and data gathering. Beyond the traditional compressed sensing methods that gave rise to the discipline, the compressive framework implies the ability to perform measurements in real time and adapt to changing conditions. Adaptive sensing optimises the gain of new information by using previously acquired measurements to guide the design and selection of the next measurement.

Cognitive radio (CR) is a promising strategy for exploiting dynamically changing spectrum holes and for making efficient use of the limited electromagnetic spectrum. A CR identifies unused radio-frequency spectrum and adjusts its parameters to use that spectrum more efficiently. Cognitive users are called secondary users, whereas licensed users are called primary users [3, 4, 5].

Narrowband sensing and wideband sensing are the two main types of spectrum sensing techniques available in CR [6, 7], as shown in Figure 1. Many narrowband strategies have been proposed; however, to accomplish more opportunistic information processing, it is necessary to sense over a wider frequency range spanning from thousands of megahertz to a few gigahertz [8, 9].

Figure 1. Spectrum sensing types.

Spectrum sensing is one of the main operations in CR. Narrowband sensing and wideband sensing are its two principal categories [10, 11, 12].

Compressive sensing (CS) is a set of techniques for describing a signal using a small number of measurements and then recovering the signal from those measurements. Recovering the original signal from the compressed data is a vital part of the CS process. With conventional Nyquist-rate acquisition, the number of samples required is huge, making the sensing operation complex and costly. To overcome these issues, compressive sensing is applied in 5G cognitive radio networks [13, 14, 15].

2. Compressive sensing theory

To introduce compressive sensing, consider a sparse signal x ∈ C^(n×1) with sparsity level k (k << n). The measurement matrix is Φ ∈ C^(m×n) (m << n), and y = Φx ∈ C^(m×1) is the measured signal. x is recovered from the underdetermined set of equations y = Φx, where Φ and y are given. x may not be sparse in and of itself, but it may be sparse in a transformed domain, written as x = Ψs, where s is the sparse representation with sparsity level k and Ψ is the transform matrix.

The standard CS model consists of three fundamental parts: (i) sparse transformation, (ii) signal compression, and (iii) sparse recovery algorithms. The sparse transformation maps the original signal x to the sparse signal s through the transform matrix Ψ. The design of Φ, or Θ = ΦΨ, is referred to as sparse signal compression: Φ should reduce the measurement dimension while minimizing information loss, which can be quantified in terms of the coherence or the restricted isometry property (RIP) of Φ or Θ. Sparse signal recovery methods are critical for reliably reconstructing x or s from the observed signal y. In 5G there are many secondary users, and it is very difficult to sense all the bands in CR networks; compressive sensing increases the sensing speed by acquiring a limited number of samples.
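The measurement side of this model can be sketched in a few lines. The following is a minimal illustration, assuming the identity basis (so x is sparse directly) and an i.i.d. Gaussian Φ — both standard choices, not specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                   # ambient dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse x

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = Phi @ x                                      # compressed measurements y = Phi x

print(y.shape)  # (64,): far fewer samples than the n = 256 original
```

A recovery algorithm then reconstructs the 256-dimensional x from these 64 measurements by exploiting its sparsity.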

Several reconstruction algorithms currently exist for sparse signal recovery; they can be divided into six types, discussed in the following sections (Figure 2).

Figure 2. Various algorithms in recovery/reconstruction.

Because CS reduces the sampling rate without sacrificing relevant information, it can be employed in MRI in medical imaging, where a higher target resolution can be achieved with fewer samples. CS is also used in single-pixel cameras.

3. Recovery/reconstruction algorithms

3.1. Convex relaxation

This class of techniques solves a convex optimization problem, typically via linear programming, to obtain the reconstructed signal [16]. Additional details are given in Section 4.

3.2. Non convex minimization algorithm

In these techniques the problems are very hard to solve in reasonable time, so heuristic algorithms are used to obtain results. Alternatively, a minimization technique can be applied in which the optimized variables are updated in a cyclic fashion under the constraints, together with linearization. Representative algorithms include the Focal Underdetermined System Solver (FOCUSS) [17], iteratively re-weighted least squares [18], sparse Bayesian learning [19], and Monte-Carlo based algorithms [20]. Non-convex minimization is mainly used in medical imaging and streaming data reduction.
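As one concrete member of this family, iteratively re-weighted least squares can be sketched as below. This is a generic textbook version of IRLS for approximating min ||x||_p subject to Ax = y; the weight update and the small eps regularizer are standard choices, not taken from the cited papers:

```python
import numpy as np

def irls(A, y, p=1.0, n_iter=50, eps=1e-8):
    """IRLS: approximate min ||x||_p subject to Ax = y (non-convex for p < 1)."""
    x = np.linalg.pinv(A) @ y                     # least-squares starting point
    for _ in range(n_iter):
        W = np.diag((x**2 + eps) ** (1 - p / 2))  # W ~ diag(|x_i|^(2-p))
        # weighted least-norm solution: every iterate satisfies Ax = y exactly
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x

rng = np.random.default_rng(1)
n, m, k = 50, 25, 4
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x_true
x_hat = irls(A, y)
```

Each re-weighting step concentrates weight on the large coefficients from the previous iterate, driving the rest toward zero.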

3.3. Greedy iterative algorithms

Greedy iterative algorithms are good for fast reconstruction with low mathematical complexity. They are multistep processes that attain the result step by step through iteration, and include matching pursuit and gradient pursuit [21, 22]. The main idea is that the least-squares error is minimized in every iteration [23]. Matching Pursuit and its derivative, Orthogonal Matching Pursuit (OMP), have low implementation cost and fast recovery, along with some limitations [24]. To overcome these issues, improved versions of OMP such as Regularized OMP [25], Stagewise OMP [21], Compressive Sampling Matching Pursuit (CoSaMP) [17], Subspace Pursuit (SP) [26], Gradient Pursuit (GP) [27], and Orthogonal Multiple Matching Pursuit (OMMP) [28] can be used.
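The greedy idea is easiest to see in OMP itself. A minimal sketch of the generic textbook algorithm follows (the demo sizes and unit-norm Gaussian columns are illustrative assumptions, not parameters from the cited works):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A to explain y."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS refit on support
        residual = y - A[:, support] @ coef          # update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 60, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = omp(A, y, k)
```

The per-iteration least-squares refit is exactly what distinguishes OMP from plain Matching Pursuit, which only subtracts the newest atom's contribution.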

3.4. Combinational/sublinear algorithms

This class of methods uses group testing to recover sparse signals. They are faster and more efficient than convex relaxation or greedy algorithms, but they require a specific pattern in the data, which must be sparse. Examples include the Fourier Sampling algorithm, the iterative Chaining Pursuit algorithm, and Heavy Hitters on Steroids (HHS) [29, 30, 31].

3.5. Iterating threshold algorithms

Iterative thresholding approaches to the CS recovery problem are faster than convex optimization and rely on a thresholding function, which is determined by the number of iterations and the problem setup [32, 33]. The Iterative Hard Thresholding (IHT) algorithm provides a theoretical guarantee along with a straightforward implementation [27, 34]. The primary idea behind IHT is to find a good candidate estimate of the support set used in the measurement. A related approach is the Message Passing (MP) algorithm, which operates on a bipartite graph with n variable nodes on one side and m measurement nodes on the other [21]. This distributed technique has a number of advantages, including low computational cost and ease of parallel or distributed implementation [35, 36].

3.6. Bregman iterative algorithm

The Bregman method introduces a novel concept: iteratively solving a succession of constrained subproblems generated by a Bregman iterative regularization scheme yields accurate solutions to constrained problems. When applied to CS problems, the iterative strategy using Bregman distance regularization attains reconstruction within four to six iterations [37].

4. Convex relaxation algorithm (LASSO)

(i) Basis pursuit, (ii) basis pursuit denoising, (iii) LASSO, and (iv) least angle regression (LARS) are some of the convex recovery algorithms used in compressive sensing [38]. Basis pursuit decomposes a signal into an optimal superposition of dictionary elements, where optimality is defined as the smallest l1 norm of coefficients among all such decompositions.

Basis pursuit can be applied to large-scale optimization problems thanks to modern linear and quadratic programming. Because the approach is complicated and time-consuming, it cannot be used in time-sensitive reconstruction applications. As the sparsity level rises, accuracy drops and only an approximately correct signal is obtained, so signal quality degrades.

The Least Absolute Shrinkage and Selection Operator (LASSO) was introduced by Tibshirani [38]. It is a widely used sparse modelling technique that originated in statistics. In this communication the authors exploit the LASSO (l1) algorithm for spectrum sensing in wideband 5G networks, which is useful for a wide variety of models. The algorithm permits large-data-set applications by exploiting sparsity to achieve computational and statistical gains. LASSO is applied in many areas such as engineering and technology, modern mathematics, computational chemistry, space science, and computer science.

(I) System Model

A cognitive radio system is considered that senses n primary-user channels. Only k primary users are active (k << n); k is unknown and varies with time, and the remaining (n − k) bands are idle and can be utilized by the secondary CR users. The sensing matrix and the recovery algorithm are the important design choices in compressive sensing. At the secondary-user side, the sensing matrix is generated from a Gaussian distribution with mean zero and variance one. The number of measurements is calculated from the following expression, Eq. (1):

m = O(k0 log(n/k0)) (1)

To achieve robust recovery, a random sensing matrix is generated, although recovery is slower than with Circulant/Toeplitz matrices. Among the various recovery algorithms, the most popular for compressive sensing is convex optimization. The l1-minimization recovery scheme, Eqs. (2) and (3), is given by

P1: minimize_x ||x||_l1 (2)
subject to ||Ax − y||_l2 ≤ ε (3)

Here, ε is a user-defined parameter chosen such that ||η||_l2 ≤ ε. This formulation is known as LASSO.

The primary-user system is considered with n = 256 bands. MATLAB CVX is used to solve the optimization problem. Here, the number of measurements is taken as m = 27 for the given sparsity level.
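Since MATLAB CVX may not be available to every reader, a rough Python equivalent can be built from the unconstrained LASSO form min ½||Ax − y||² + λ||x||₁ solved by ISTA (iterative soft thresholding, a proximal-gradient method) rather than CVX's interior-point solver; the λ value, sparsity level, and random seed below are illustrative assumptions:

```python
import numpy as np

def ista_lasso(A, y, lam=0.01, n_iter=1000):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
n, m, k = 256, 27, 3                             # 256 bands, m = 27 measurements
A = rng.standard_normal((m, n))                  # Gaussian: mean 0, variance 1
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0   # k active primary-user bands
y = A @ x_true
x_hat = ista_lasso(A, y)                         # sparse occupancy estimate
```

Thresholding the recovered x_hat then yields the estimated set of occupied bands, mirroring the detection step of the simulation.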

Figure 3 shows the plot of sparsity level against the minimum number of samples required for compressive sensing. The theoretical simulations are carried out from the following equations [39].

Ms > 1.39 K log(n/K + 0.5) + 2.26 (4)
Ms ≥ 1.2 s_nz log(n/s_nz + 0.5) (5)

Figure 3. Sparsity level vs. minimum number of samples.

Here, Ms is the minimum number of samples, K the sparsity level, and n the total number of samples in Eq. (4); s_nz is the sparsity order derived from the Nyquist sampling rate in two-step compressed spectrum sensing. Curve fitting is the act of creating a curve, or mathematical function, that best fits a set of data points, potentially with constraints; here it is used to find a closed-form expression from the given data. From the simulation graph, it is found that the expression obtained from Eq. (4) uses a smaller number of samples than Eq. (5).
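Eqs. (4) and (5) can be tabulated directly to reproduce the curves in Figure 3. A small sketch follows; the base of the logarithm is not stated in the text, so the natural logarithm is assumed here:

```python
import numpy as np

def ms_eq4(K, n):
    """Eq. (4): minimum samples Ms > 1.39*K*log(n/K + 0.5) + 2.26."""
    return 1.39 * K * np.log(n / K + 0.5) + 2.26

def ms_eq5(s_nz, n):
    """Eq. (5): minimum samples Ms >= 1.2*s_nz*log(n/s_nz + 0.5)."""
    return 1.2 * s_nz * np.log(n / s_nz + 0.5)

n = 256  # number of primary-user bands, as in the simulation
for K in (2, 4, 8, 16):
    print(f"K={K:2d}  Eq.(4): {ms_eq4(K, n):6.1f}  Eq.(5): {ms_eq5(K, n):6.1f}")
```

Sweeping K over the sparsity levels of interest gives the sample-count curves for both bounds.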

Figure 4 shows the simulated mean square error for various signal-to-noise ratio (SNR) values. From the graph, it is found that the mean square error is almost constant over the given SNR range, in good agreement with recent reports [39, 40].

Figure 4. Signal-to-noise ratio vs. mean square error.

Further, Figure 5 shows the simulated mean square error against the sensing SNR, where the sensing SNR is defined in Eqs. (6), (7), and (8) as follows:

SNR = ||Ax||_l2^2 / ||η||_l2^2 (6)

where,

||Ax||_l2^2 = (Ax)^T Ax (7)

and

||η||_l2^2 = η^T η (8)

Figure 5. Sensing SNR vs. mean square error.

Hastie's group found that, compared to existing recovery algorithms such as OMP and CoSaMP, the performance of LASSO is better [41].

5. Conclusion

In this theoretical approach to compressive sensing, a signal is sub-sampled and subsequently rebuilt using an optimization-based reconstruction technique. The application of compressed sensing theory to exploit sparsity in critical 5G methods is also highlighted. LASSO is the main tool for reconstruction and rebuilds the signal. The LASSO method of convex optimization was simulated for various sensing SNR values, and the mean square error values were plotted; the mean square error is almost constant over the given SNR range. The algorithms used for signal reconstruction find the solution of systems of linear equations using sparseness conditions, and the relative merits and demerits of the various algorithms are discussed. In future, deep-learning-based algorithms can be used to construct compressed spectrum sensing in wideband cognitive radio networks.

Declarations

Author contribution statement

RS Koteeswari: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

B Malarkodi: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

Data will be made available on request.

Declaration of interests statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

References

  • 1.Suriya M., Sumithra M.G. 2021. EAI Endorsed Transactions on Energy Web Online First. [Google Scholar]
  • 2.Tianheng X., Ting Z., Jinfeng T., Jian S., Honglin H. Intelligent spectrum sensing: when reinforcement learning meets automatic repeat sensing in 5G communications. IEEE Wireless Commun. 2020;27(1):46–53. [Google Scholar]
  • 3.Ghasemi G., Sousa E.S. Spectrum sensing in cognitive radio networks: requirements, challenges and design trade-offs. IEEE Commun. Mag. 2008;46(4):32–39. [Google Scholar]
  • 4.Hong G.S., Arumugam N., Yunfei C. Wideband spectrum sensing for cognitive radio networks: a survey. IEEE Wireless Commun. 2013;20(2):74–81. [Google Scholar]
  • 5.Dagne D.T., Fante K.A., Desta G.A. Compressive sensing based maximum-minimum subband energy detection for cognitive radios. Heliyon. 2020;6(9) doi: 10.1016/j.heliyon.2020.e04906. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Yldar Y.C. Cambridge University Press; 2015. Sampling Theory: beyond Bandlimited Systems. [Google Scholar]
  • 7.Hsu C.C., Lin C.H., Kao C.H., Lin Y.C. DCSN: Deep compressed sensing network for efficient hyperspectral data transmission of miniaturized satellite. IEEE Trans. Geosci. Rem. Sens. 2021;59(9):7773–7789. [Google Scholar]
  • 8.Zhen G., Linglong D., Shuangfeng H., Chih-Lin I., Zhaocheng W., Hanzo L. Compressive sensing techniques for next-generation wireless communication. IEEE Wireless Commun. 2018;56(4):211–217. [Google Scholar]
  • 9.Amalladinne V.K., Ebert J.R., Chamberland J.F., Narayanan K.R. An enhanced decoding algorithm for coded compressed sensing with applications to unsourced random access. Sensors. 2022;22:676–689. doi: 10.3390/s22020676. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Li B., Li S.H., Nallanathan A., Zhao C.L. Deep sensing for future spectrum and location awareness 5G Communications. IEEE J. Sel. Area. Commun. 2015;33(7):1331–1344. [Google Scholar]
  • 11.Deng M., Jie Hu B., Li X.H. Adaptive weighted sensing with simultaneous tranmission for dynamic primary user traffic. IEEE Trans. Commun. 2017;65(3):992–1004. [Google Scholar]
  • 12.Ibrahim A.H., Kumam P., Abubakar A.B., Jirakitpuwapat W., Abubakar J. A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing. Heliyon. 2020;6(3) doi: 10.1016/j.heliyon.2020.e03466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Li B., Li S.H., Nallanathan A., Zhao C.L. Deep sensing for space-time doubly selective channels: when a primary user is mobile and the channel is flat Rayleigh fading. IEEE Trans. Signal Process. 2016;64(13):3362–3375. [Google Scholar]
  • 14.Tian heng X., Honglin H.U., Mengying Z. Sliced sensing system:toward 5G cognitive radio applications under fast time varying channels. IEEE Syst. J. 2019;13(2):1297–1307. [Google Scholar]
  • 15.Liu J., Jiang X., Tian X., Mallick M., Huang K., Ma C. Hybrid particle filter based dynamic compressed sensing for signal-level multitarget tracking. IEEE Access. 2020;8:17134–17148. [Google Scholar]
  • 16.Candes E.J., Recht B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009;9(6):717–772. [Google Scholar]
  • 17.Murray J., Kreutz-Delgado K. An improved FOCUSS-based learning algorithm for solving sparse linear inverse problems. Conference Record of Thirty-Fifth Asilo- mar Conference on Signals. Syst. Comp. (Cat.No.01CH37256) 2001;1:347–351. [Google Scholar]
  • 18.Chartrand R., Yin W. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Acoustics, Speech and Signal Processing. ICASSP; 2008. Iteratively reweighted algorithms for compressive sens- ing; pp. 3869–3872. IEEE International Conference on, Las Vegas, NV. [Google Scholar]
  • 19.Wipf D.P., Rao B.D. Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process. 2004;52(8):2153–2164. [Google Scholar]
  • 20.Godsill S.J., Cemgil A.T., Fevotte C., Wolfe P.J. European Signal Processing Con- Ference. 2007. Bayesian computational methods for sparse audio and music processing; pp. 345–349. [Google Scholar]
  • 21.Donoho D.L., Tsaig Y., Drori I., Starck J.L. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theor. 2012;58(2):1094–1121. [Google Scholar]
  • 22.Du L., Wang R., Wan W., Yu X.Q., Yu S. 2012 International Conference on Audio, Language and Image Processing. 2012. Analysis on greedy reconstruction algorithms based on compressed sensing; pp. 783–789. [Google Scholar]
  • 23.Mallat S.G., Zhang Z. Matching pursuits with time-frequency dic-tionaries. IEEE Trans. Signal Process. 1993;41(12):3397–3415. [Google Scholar]
  • 24.Tropp J.A., Gilbert A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theor. 2007;53(12):4655–4666. [Google Scholar]
  • 25.Needell D., Vershynin R. Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math. 2009;9(3):317–334. [Google Scholar]
  • 26.Dai W., Milenkovic O. Subspace pursuit for compressive sensing signal recon- struction. IEEE Trans. Inf. Theor. 2009;55(5):2230–2249. [Google Scholar]
  • 27.Figueiredo M.A.T., Nowak R.D., Wright S.J. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Select. Topic. Signal Proc. 2007;1(4):586–597. [Google Scholar]
  • 28.Liu E., Temlyakov V.N. The orthogonal super greedy algorithm and applications in compressed sensing. IEEE Trans. Inf. Theor. 2012;58(4):2040–2047. [Google Scholar]
  • 29.Gilbert A.C., Muthukrishnan S., Strauss M. In: Improved Time Bounds for Near- Optimal Sparse Fourier Representations. Papadakis M., Laine A.F., Unser, editors. 2005. [Google Scholar]
  • 30.Gilbert A.C., Strauss M.J., Tropp J.A., Vershynin R. 44th Annual Allerton Conference on Communication, Control, and Computing. Allerton; 2006. Algorithmic linear dimension reduction in the l 1 norm for sparse vectors. [Google Scholar]
  • 31.Gilbert A.C., Strauss M.J., Tropp J.A., Vershynin R. vol. 7. 2007. One sketch for all: fast algorithms for compressed sensing; pp. 237–246. (Proceedings of the thirty-ninth annual ACM symposium on Theory of computing - STOC). [Google Scholar]
  • 32.Donoho D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theor. 1995;41(3):613–627. [Google Scholar]
  • 33.Blumensath T., Davies M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009;27(3):265–274. [Google Scholar]
  • 34.Foucart S. Approximation Theory XIII: San Antonio. 2010. Sparse recovery algorithms: su_cient conditions in terms of Re- stricted isometry constants; pp. 65–77. [Google Scholar]
  • 35.Berinde R., Indyk P., Ruzic M. 46th Annual Allerton Conference on Communication, Control, and Computing. 2008. Practical near-optimal sparse recovery in the L1 norm; pp. 198–205. [Google Scholar]
  • 36.Berinde R., Indyk P. 2009 47th annual Allerton Conference on Communication, Control, and Computing. IEEE; Monticello, IL: 2009. Sequential sparse matching pursuit; pp. 36–43. Allerton. [Google Scholar]
  • 37.Yin W., Osher S., Goldfarb D., Darbon J. Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 2008;1(1):143–168. [Google Scholar]
  • 38.Tibshirani R. Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. B. 1996;58:267–288. [Google Scholar]
  • 39.Shahzadi A. Adaptive data driven wideband compressive spectrum sensing for cognitive radio networks. J. Commun. Inform. Netw. 2018;3 [Google Scholar]
  • 40.Khalfi B., Hamdaoui B., Guizani M., Zorba N. IEEE Conference on Computer Communications Workshop. 2017. Exploiting wideband spectrum occupancy heterogeneity for weighted compressive spectrum sensing. [Google Scholar]
  • 41.Hastie T.J., Tibshirani R., Tibshirani R.J. Extended comparisons of best subset selection, forward stepwise selection, and the Lasso. arXiv: Methodology. 2017 [Google Scholar]



Articles from Heliyon are provided here courtesy of Elsevier
