Entropy. 2021 Dec 16;23(12):1688. doi: 10.3390/e23121688

Inequalities for Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Type f–Divergences

Paweł A. Kluza
Editors: Boris Ryabko, Takuya Yamano
PMCID: PMC8700545  PMID: 34945994

Abstract

In this paper, we introduce new divergences, called the Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences, defined in relation to convex functions. We prove theorems that give lower and upper bounds for the two newly introduced divergences. The obtained results imply new inequalities for known divergences. Examples are provided which show that these constructions generalize the Rényi, Tsallis, and Kullback–Leibler types of divergences, and which illustrate a few applications of the new divergences.

Keywords: convex function, Csiszár f-divergence, Sharma–Mittal f-divergence, Jensen–Sharma–Mittal divergence, Jeffreys–Sharma–Mittal divergence

MSC: 94A17, 26D15, 15B51

1. Introduction

The Sharma–Mittal entropy was introduced as a new two-parameter measure of information [1]. It has previously been studied in the context of multi-dimensional harmonic oscillator systems [2]. This entropy can also be formulated for exponential families, to which many common statistical distributions, including the Gaussians and discrete multinomials (that is, normalized histograms), belong. In physical applications, it plays a major role in the field of thermo-statistics [3].

The Sharma–Mittal entropy is also applied in the analysis of the results of machine learning methods [4,5]. Additionally, the divergence based on this entropy can serve as a cost function in the context of so-called Twin Gaussian Processes [6].

It was originally shown in [7] that the Sharma–Mittal entropy generalizes both the Tsallis and Rényi entropies, which arise as its limiting cases. In [8], the authors suggested a physical meaning of the Sharma–Mittal entropy, namely the free energy difference between the equilibrium and the off-equilibrium distribution.

Recently, a manuscript was published showing, in opposition to the work [8], that for suitable thermodynamic systems the Sharma–Mittal entropy does not reduce only to the Kullback–Leibler entropy. In [9], Verma and Merigó present the use of the Sharma–Mittal entropy in an intuitionistic fuzzy environment. Additionally, in [5], Koltcov et al. demonstrate that the Sharma–Mittal entropy is a tool for selecting both the number of topics and the values of hyper-parameters while simultaneously controlling for semantic stability, which none of the existing metrics can do.

Other applications of this entropy include interesting results in the cosmological setting, such as black hole thermodynamics [10]. Namely, it helps to describe the currently accelerating universe by using the vacuum energy in a suitable manner [11]. In addition, the authors of [12] established the relation between anomalous diffusion processes and the Sharma–Mittal entropy.

This paper is based on publications in which we introduced new types of f-divergences [13,14,15,16].

In this paper, we generalize Sharma–Mittal type divergences in order to obtain new types of divergences and, hence, inequalities from which new results and generalizations for known divergences can be derived, with the aim of estimating the lower and upper bounds that determine the level of the uncertainty measure.

2. Sharma–Mittal Type Divergences

Throughout, $\mathbb{R}_+$ and $\mathbb{R}_{++}$ denote the sets of non-negative and positive numbers, respectively, i.e., $\mathbb{R}_+=[0,\infty)$ and $\mathbb{R}_{++}=(0,\infty)$.

Let $p=(p_1,\dots,p_n)$ and $q=(q_1,\dots,q_n)$ with $p_i,q_i\ge 0$, $i=1,\dots,n$. The relative entropy (also called the Kullback–Leibler divergence) is defined by (see [17])

$$H_1(p,q)=\sum_{i=1}^{n} p_i\log\frac{p_i}{q_i}. \qquad (1)$$

In the above definition, based on continuity arguments, we use the conventions $0\log\frac{0}{q}=0$ and $p\log\frac{p}{0}=+\infty$. Additionally, $0\log\frac{0}{0}=0$.
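A minimal Python sketch of (1) under these conventions (the function name and the example distributions are illustrative, not from the paper) could look as follows:

```python
import math

def kl_divergence(p, q):
    """Relative entropy H_1(p, q) from Equation (1), using the conventions
    0*log(0/q) = 0 and p*log(p/0) = +inf for p > 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue            # 0*log(0/q) = 0 (also covers 0*log(0/0) = 0)
        if qi == 0:
            return math.inf     # p*log(p/0) = +inf
        total += pi * math.log(pi / qi)
    return total

# Example: H_1 between two distributions on three points
print(kl_divergence([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # ~0.0253
```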

Let $f:\mathbb{R}_+\to\mathbb{R}$ be a convex function on $\mathbb{R}_+$, and let $p=(p_1,\dots,p_n)\in\mathbb{R}_{++}^n$, $q=(q_1,\dots,q_n)\in\mathbb{R}_+^n$.

The Csiszár f-divergence is defined by (see [15])

$$C_f(p,q)=\sum_{i=1}^{n} p_i\, f\!\left(\frac{q_i}{p_i}\right), \qquad (2)$$

with the conventions $0f\!\left(\frac{0}{0}\right)=0$ and $0f\!\left(\frac{c}{0}\right)=c\lim_{t\to\infty}\frac{f(t)}{t}$, $c>0$ (see [18,19,20]).

The Tsallis divergence of order α is defined by (see [17])

$$T_\alpha(p,q)=\frac{1}{\alpha-1}\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}-1\right).$$

The Rényi divergence of order α is defined by (see [17,21])

$$H_\alpha(p,q)=\frac{1}{\alpha-1}\log\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}.$$

The Sharma–Mittal divergence of order α and degree β is defined by (see [4])

$$SM_{\alpha,\beta}(p,q)=\frac{1}{\beta-1}\left[\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}-1\right], \qquad (3)$$

for all $\alpha>0$, $\alpha\ne1$ and $\beta\ne1$.

Let $g:I\to\mathbb{R}$ be a convex function on an interval $I\subseteq\mathbb{R}$. Let $x=(x_1,\dots,x_n)\in I^n$ and let $p_i\in[0,1]$, $i=1,\dots,n$, be weights with $\sum_{i=1}^{n}p_i=1$.

Jensen's inequality is as follows (see [22]):

$$g\left(\sum_{i=1}^{n} p_i x_i\right)\le\sum_{i=1}^{n} p_i\, g(x_i). \qquad (4)$$

When the function $l:\mathbb{R}_+\to\mathbb{R}_+$ is convex and the function $k:\mathbb{R}_+\to\mathbb{R}$ is convex and increasing, then the composition $k\circ l:\mathbb{R}_+\to\mathbb{R}$ is convex. We assume that the probabilities $p_i\ge0$ and $q_i>0$ for $i=1,\dots,n$.

It is known (see [4]) that if

$$\alpha=\beta\to1, \text{ then } SM_{\alpha,\beta}(p,q)\to H_1(p,q),$$
$$\beta\to1 \text{ and } \alpha\in\mathbb{R}, \text{ then } SM_{\alpha,\beta}(p,q)\to H_\alpha(p,q),$$
$$\beta=\alpha, \text{ then } SM_{\alpha,\beta}(p,q)=T_\alpha(p,q).$$
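These limiting relations are easy to verify numerically. Below is a minimal Python sketch of (1), (3) and the Tsallis and Rényi divergences (all helper names and sample data are ours, not from the paper), checking the three cases above:

```python
import math

def power_sum(p, q, alpha):
    # Common building block: sum_i p_i^alpha * q_i^(1 - alpha)
    return sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))

def kl(p, q):                              # H_1, Equation (1)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tsallis(p, q, alpha):                  # T_alpha
    return (power_sum(p, q, alpha) - 1) / (alpha - 1)

def renyi(p, q, alpha):                    # H_alpha
    return math.log(power_sum(p, q, alpha)) / (alpha - 1)

def sharma_mittal(p, q, alpha, beta):      # SM_{alpha,beta}, Equation (3)
    s = power_sum(p, q, alpha)
    return (s ** ((1 - beta) / (1 - alpha)) - 1) / (beta - 1)

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
eps = 1e-9
print(sharma_mittal(p, q, 1 + eps, 1 + eps), kl(p, q))      # alpha = beta -> 1
print(sharma_mittal(p, q, 2.0, 1 + eps), renyi(p, q, 2.0))  # beta -> 1
print(sharma_mittal(p, q, 2.0, 2.0), tsallis(p, q, 2.0))    # beta = alpha
```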

Let $h:\mathbb{R}\to\mathbb{R}$ be a differentiable function. Then the Sharma–Mittal h-divergence is defined as follows:

$$SM_{h,\alpha,\beta}(p,q)=\frac{h\!\left(\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}, \qquad (5)$$

for all $\alpha>0$, $\alpha\ne1$ and $\beta\ne1$.

If we assume that $h=\mathrm{id}$, then (5) becomes the Sharma–Mittal divergence.

When $h(t)=\log(te)$ for all $t>0$, then (5) becomes the Rényi divergence of order $\alpha$.

We substitute $t=\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}$ and we have

$$h(t)=h\!\left(\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}\right)=\log\left(\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}e\right)=\log\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}+1.$$

Hence, from (5),

$$SM_{h,\alpha,\beta}(p,q)=\frac{\log\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}}{\beta-1}=\frac{1}{\alpha-1}\log\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}=H_\alpha(p,q).$$
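This identity holds exactly for every admissible $\beta$, not only in the limit, as a quick numerical sketch confirms (the helper names and sample data are illustrative):

```python
import math

def sm_h(p, q, alpha, beta, h):
    # Sharma-Mittal h-divergence, Equation (5)
    s = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return (h(s ** ((1 - beta) / (1 - alpha))) - 1) / (beta - 1)

h = lambda t: math.log(t * math.e)       # h(t) = log(te) = log t + 1

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
alpha = 2.0
renyi = math.log(sum(pi**alpha * qi**(1 - alpha)
                     for pi, qi in zip(p, q))) / (alpha - 1)
for beta in (0.5, 2.0, 3.0):             # any beta != 1 gives H_alpha exactly
    print(sm_h(p, q, alpha, beta, h), renyi)
```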

Let $\Psi:\mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}$ be a function differentiable with respect to $\beta$, with

$$\Psi(\alpha,\beta)=h\!\left(\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}\right).$$

We assume that $h(1)=1$ (so that $\Psi(\alpha,1)=1$) and $h'(1)=1$. Then,

$$\lim_{\beta\to1}\frac{\Psi(\alpha,\beta)-\Psi(\alpha,1)}{\beta-1}=\frac{\partial\Psi}{\partial\beta}(\alpha,1)=h'\!\left(\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}\right)\Bigg|_{\beta=1}\cdot\frac{\partial}{\partial\beta}\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}}\Bigg|_{\beta=1}$$
$$=h'(1)\left(\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}\right)^{0}\cdot\frac{-1}{1-\alpha}\log\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}=h'(1)\,\frac{1}{\alpha-1}\log\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}=\frac{1}{\alpha-1}\log\sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}.$$

Hence, as $\beta\to1$, the Sharma–Mittal h-divergence tends to the Rényi divergence of order $\alpha$.

Remark 1.

If, additionally, $\alpha$ tends to 1, then, based on the proof of Equation (11) in [16], the Sharma–Mittal h-divergence tends to the relative entropy (the Kullback–Leibler divergence).

Now we define a new generalized $(h,\phi)$ Sharma–Mittal divergence as follows:

$$SM_{h,\phi,\alpha,\beta}(p,q)=\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}, \qquad (6)$$

where $\phi:(0,+\infty)\times\mathbb{R}_+\to\mathbb{R}_+$ is an increasing, non-negative and differentiable function, for $\beta>1$.

We assume that $F=\left\{f_\alpha:(0,\infty)\to\mathbb{R}:\alpha\in\mathbb{R}\right\}$ is a given family of functions such that $\sum_{i=1}^{n} q_i f_\alpha\big|_{\alpha=1}\!\left(\frac{p_i}{q_i}\right)=1$ for $\alpha=1$, which are increasing and non-negative for $\alpha>1$, and such that for every $t\in(0,+\infty)$ the function $\alpha\mapsto f_\alpha(t)$ is differentiable.

According to [16], if we substitute the function $f_\alpha\!\left(\frac{p_i}{q_i}\right)$ from the family $F$ for $\phi\!\left(\frac{p_i}{q_i},\alpha\right)$, then

$$\lim_{\beta\to1}SM_{h,\phi,\alpha,\beta}(p,q)=R_{h,f_\alpha}(p,q).$$

We assume that $h(1)=1$ and $h'(1)=1$. Then,

$$\lim_{\beta\to1}SM_{h,\phi,\alpha,\beta}(p,q)=\lim_{\beta\to1}\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}$$
$$=h'\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{0}\right)\cdot\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{0}\log\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)\cdot\frac{-1}{1-\alpha}$$
$$=h'(1)\,\frac{1}{\alpha-1}\log\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)=\frac{1}{\alpha-1}\log\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)$$
$$=\frac{1}{\alpha-1}\log\sum_{i=1}^{n} q_i\, f_\alpha\!\left(\frac{p_i}{q_i}\right)=R_{h,f_\alpha}(p,q). \qquad (7)$$

Remark 2.

If in (6) $\beta\to1$ and $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=f_\alpha\!\left(\frac{p_i}{q_i}\right)$, then the generalized $(h,\phi)$-Sharma–Mittal divergence tends to the generalized $(h,F)$-Rényi divergence.

The function $\phi$ generalizes the function $f_\alpha\!\left(\frac{p_i}{q_i}\right)$, which is used, for example, in the Csiszár f-divergence. The condition $\beta\to1$ means that the limit of the generalized Sharma–Mittal divergence equals the generalized $(h,F)$-Rényi divergence. Hence we obtain implications for generalized forms of entropies. A numerical illustration of this limit follows.
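As a sanity check on (6) and this limit, here is a minimal Python sketch with the illustrative choices $h=\mathrm{id}$ and $\phi(t,\alpha)=t^\alpha$ (all names and data are ours, not from the paper):

```python
import math

def sm_h_phi(p, q, alpha, beta, h, phi):
    # Generalized (h, phi) Sharma-Mittal divergence, Equation (6)
    s = sum(qi * phi(pi / qi, alpha) for pi, qi in zip(p, q))
    return (h(s ** ((1 - beta) / (1 - alpha))) - 1) / (beta - 1)

h   = lambda t: t                    # satisfies h(1) = 1 and h'(1) = 1
phi = lambda t, a: t**a              # illustrative family f_alpha(t) = t^alpha

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
alpha = 2.0
limit = math.log(sum(qi * phi(pi / qi, alpha)
                     for pi, qi in zip(p, q))) / (alpha - 1)
for beta in (1.5, 1.1, 1.001):       # values approach the beta -> 1 limit
    print(sm_h_phi(p, q, alpha, beta, h, phi), "->", limit)
```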

Remark 3.

Additionally, when in (6) $\alpha\to1$ and $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=\left(\frac{p_i}{q_i}\right)^\alpha$, the generalized $(h,\phi)$-Sharma–Mittal divergence tends to the Kullback–Leibler divergence, because from Remark 2 we have

$$\lim_{(\alpha,\beta)\to(1,1)}SM_{h,\phi,\alpha,\beta}(p,q)=\lim_{\alpha\to1}\frac{\log\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)-\log 1}{\alpha-1}=\frac{\partial}{\partial\alpha}\Bigg|_{\alpha=1}\log\sum_{i=1}^{n} q_i\left(\frac{p_i}{q_i}\right)^\alpha$$
$$=\frac{1}{\sum_{i=1}^{n} p_i}\sum_{i=1}^{n} q_i\,\frac{p_i}{q_i}\log\frac{p_i}{q_i}=\sum_{i=1}^{n} p_i\log\frac{p_i}{q_i}=H_1(p,q).$$

Remark 4.

In (6), when the parameter $\beta=\alpha$, the function $h=\mathrm{id}$ and $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=\left(\frac{p_i}{q_i}\right)^\alpha$, the generalized $(h,\phi)$-Sharma–Mittal divergence reduces to the Tsallis f-divergence of order $\alpha$.

This work is more theoretical than practical; accordingly, its implications are formulated in mathematical terms, that is, by constructing a general model from which known specific cases are recovered.

3. Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Divergences

The Jensen–Shannon divergence (Jensen–Shannon entropy) is defined as follows (see [17]):

$$Jen(p,q)=\frac{1}{2}H_1\!\left(p,\frac{p+q}{2}\right)+\frac{1}{2}H_1\!\left(q,\frac{p+q}{2}\right).$$

The Jeffreys divergence (Jeffreys entropy) is defined as follows (see [17]):

$$Jef(p,q)=H_1(p,q)+H_1(q,p).$$

We introduce a new generalized $(h,\phi)$ Jensen–Sharma–Mittal divergence defined by

$$JenSM_{\alpha,\beta}^{h,\phi}(p,q)=\frac{1}{2}SM_{h,\phi,\alpha,\beta}\!\left(p,\frac{p+q}{2}\right)+\frac{1}{2}SM_{h,\phi,\alpha,\beta}\!\left(q,\frac{p+q}{2}\right), \qquad (8)$$

with assumptions as before.

We similarly introduce a new generalized $(h,\phi)$ Jeffreys–Sharma–Mittal divergence as follows:

$$JefSM_{\alpha,\beta}^{h,\phi}(p,q)=SM_{h,\phi,\alpha,\beta}(p,q)+SM_{h,\phi,\alpha,\beta}(q,p). \qquad (9)$$

Taking into account the inequality from [17]:

$$0\le Jen(p,q)\le\frac{1}{2}Jef(p,q),$$

describing the relation between the Jensen–Shannon and Jeffreys divergences, we can formulate the following:

$$0\le JenSM_{\alpha,\beta}^{h,\phi}(p,q)\le\frac{1}{2}JefSM_{\alpha,\beta}^{h,\phi}(p,q). \qquad (10)$$

We define the Jensen–Sharma–Mittal h-divergence by setting $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=\left(\frac{p_i}{q_i}\right)^\alpha$ in (8). Then, it takes the form:

$$JenSM_{\alpha,\beta}^{h}(p,q)=\frac{1}{2}SM_{h,\alpha,\beta}\!\left(p,\frac{p+q}{2}\right)+\frac{1}{2}SM_{h,\alpha,\beta}\!\left(q,\frac{p+q}{2}\right). \qquad (11)$$

In the same way, we define the Jeffreys–Sharma–Mittal h-divergence:

$$JefSM_{\alpha,\beta}^{h}(p,q)=SM_{h,\alpha,\beta}(p,q)+SM_{h,\alpha,\beta}(q,p). \qquad (12)$$

Additionally, if $h(t)=t$, then we obtain the Jensen–Sharma–Mittal and the Jeffreys–Sharma–Mittal divergences of order $\alpha$ and degree $\beta$, respectively:

$$JenSM_{\alpha,\beta}(p,q)=\frac{1}{2}SM_{\alpha,\beta}\!\left(p,\frac{p+q}{2}\right)+\frac{1}{2}SM_{\alpha,\beta}\!\left(q,\frac{p+q}{2}\right), \qquad (13)$$
$$JefSM_{\alpha,\beta}(p,q)=SM_{\alpha,\beta}(p,q)+SM_{\alpha,\beta}(q,p). \qquad (14)$$
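In code, (13) and (14) are thin wrappers around a Sharma–Mittal routine; the sketch below (helper names and sample data are ours) also checks inequality (10) numerically in this special case:

```python
def sharma_mittal(p, q, alpha, beta):           # Equation (3)
    s = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return (s ** ((1 - beta) / (1 - alpha)) - 1) / (beta - 1)

def jeffreys_sm(p, q, alpha, beta):             # Equation (14)
    return sharma_mittal(p, q, alpha, beta) + sharma_mittal(q, p, alpha, beta)

def jensen_sm(p, q, alpha, beta):               # Equation (13)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # midpoint distribution
    return (sharma_mittal(p, m, alpha, beta) +
            sharma_mittal(q, m, alpha, beta)) / 2

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
# Inequality (10): 0 <= JenSM <= (1/2) JefSM
print(0 <= jensen_sm(p, q, 2.0, 1.5) <= 0.5 * jeffreys_sm(p, q, 2.0, 1.5))  # True
```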

When in (8) and (9) $\beta\to1$ and we substitute $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=f_\alpha\!\left(\frac{p_i}{q_i}\right)$, then we obtain the generalized $(h,F)$ Jensen–Rényi and Jeffreys–Rényi divergences defined in [16], respectively:

$$JenSM_{\alpha,1}^{h,f_\alpha}(p,q)=JenR_{h,f_\alpha}(p,q)=\frac{1}{2}R_{h,f_\alpha}\!\left(p,\frac{p+q}{2}\right)+\frac{1}{2}R_{h,f_\alpha}\!\left(q,\frac{p+q}{2}\right),$$
$$JefSM_{\alpha,1}^{h,f_\alpha}(p,q)=JefR_{h,f_\alpha}(p,q)=R_{h,f_\alpha}(p,q)+R_{h,f_\alpha}(q,p). \qquad (15)$$

The following theorem is a generalization and refinement of the inequalities for some known divergences; it provides lower and upper bounds for the generalized $(h,\phi)$ Jeffreys–Sharma–Mittal divergence, allowing a more accurate estimation of its uncertainty measure.

Theorem 1.

Let $p=(p_1,\dots,p_n)$ and $q=(q_1,\dots,q_n)$ be two discrete probability distributions with $p_i>0$, $q_i>0$, $\frac{q_i}{p_i}\in I_0$, $\frac{p_i}{q_i}\in I_0$, $i=1,\dots,n$, where $I_0\subseteq\mathbb{R}$ is an interval such that $1\in I_0$. Let $\phi:I_0\times\mathbb{R}_+\to\mathbb{R}_+$ be an increasing, non-negative and differentiable function for which $\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\ge1$ and $\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\ge1$, where $1<\beta\le\alpha$, $\alpha\in\mathbb{R}_+\setminus\{1\}$, and let $h:I_0\to\mathbb{R}$ be a convex and increasing function on $I_0$.

Then, the following inequalities are valid:

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\phi\!\left(\frac{p_i}{q_i},\alpha\right)^{q_i}\phi\!\left(\frac{q_i}{p_i},\alpha\right)^{p_i}\le JefSM_{\alpha,\beta}^{h,\phi}(p,q)\le\frac{JefC_{h\circ\phi}(p,q)-2}{\beta-1}. \qquad (16)$$

Proof. 

Taking into account the assumptions, we can formulate the following inequality:

$$\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\le\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right). \qquad (17)$$

The function $h$ is increasing and convex; therefore, from (4) and (17) we obtain the inequalities:

$$h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)\le h\!\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)\le\sum_{i=1}^{n} q_i\,(h\circ\phi)\!\left(\frac{p_i}{q_i},\alpha\right). \qquad (18)$$

In the same way, we obtain the following inequalities:

$$h\!\left(\left(\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)\le h\!\left(\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\right)\le\sum_{i=1}^{n} p_i\,(h\circ\phi)\!\left(\frac{q_i}{p_i},\alpha\right). \qquad (19)$$

From (9) we have

$$JefSM_{\alpha,\beta}^{h,\phi}(p,q)=SM_{h,\phi,\alpha,\beta}(p,q)+SM_{h,\phi,\alpha,\beta}(q,p)$$
$$=\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}+\frac{h\!\left(\left(\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}.$$

Taking into account (2), (18), (19) and the definition of the Jeffreys divergence, it follows that:

$$JefSM_{\alpha,\beta}^{h,\phi}(p,q)\le\frac{\sum_{i=1}^{n} q_i\,(h\circ\phi)\!\left(\frac{p_i}{q_i},\alpha\right)-1}{\beta-1}+\frac{\sum_{i=1}^{n} p_i\,(h\circ\phi)\!\left(\frac{q_i}{p_i},\alpha\right)-1}{\beta-1}$$
$$=\frac{\sum_{i=1}^{n} q_i\,(h\circ\phi)\!\left(\frac{p_i}{q_i},\alpha\right)+\sum_{i=1}^{n} p_i\,(h\circ\phi)\!\left(\frac{q_i}{p_i},\alpha\right)-2}{\beta-1}=\frac{JefC_{h\circ\phi}(p,q)-2}{\beta-1}. \qquad (20)$$

The above inequality gives the upper bound for the generalized $(h,\phi)$ Jeffreys–Sharma–Mittal divergence.

By using the convexity of the function $h$ with $h(1)=1$, the following inequality is valid for $\beta>1$:

$$\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}\ge\frac{\partial}{\partial\beta}\Bigg|_{\beta=1}h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right). \qquad (21)$$

From (7), the above derivative is equal to $\frac{1}{\alpha-1}\log\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)$.

The function $f(t)=\log t$ is concave and increasing. Then, it follows that:

$$\frac{1}{\alpha-1}\log\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\ge\frac{1}{\alpha-1}\sum_{i=1}^{n} q_i\log\phi\!\left(\frac{p_i}{q_i},\alpha\right). \qquad (22)$$

Hence, from (21) and (22) we have the inequality:

$$\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}\ge\frac{1}{\alpha-1}\sum_{i=1}^{n} q_i\log\phi\!\left(\frac{p_i}{q_i},\alpha\right). \qquad (23)$$

Similarly, we obtain the second inequality:

$$\frac{h\!\left(\left(\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}\ge\frac{1}{\alpha-1}\sum_{i=1}^{n} p_i\log\phi\!\left(\frac{q_i}{p_i},\alpha\right). \qquad (24)$$

We have from (6), (23) and (24) that:

$$SM_{h,\phi,\alpha,\beta}(p,q)\ge\frac{1}{\alpha-1}\sum_{i=1}^{n} q_i\log\phi\!\left(\frac{p_i}{q_i},\alpha\right),$$
$$SM_{h,\phi,\alpha,\beta}(q,p)\ge\frac{1}{\alpha-1}\sum_{i=1}^{n} p_i\log\phi\!\left(\frac{q_i}{p_i},\alpha\right).$$

Then, by using definition (9) we have:

$$JefSM_{\alpha,\beta}^{h,\phi}(p,q)\ge\frac{1}{\alpha-1}\left[\log\prod_{i=1}^{n}\phi\!\left(\frac{p_i}{q_i},\alpha\right)^{q_i}+\log\prod_{i=1}^{n}\phi\!\left(\frac{q_i}{p_i},\alpha\right)^{p_i}\right]$$
$$=\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\phi\!\left(\frac{p_i}{q_i},\alpha\right)^{q_i}\phi\!\left(\frac{q_i}{p_i},\alpha\right)^{p_i}. \qquad (25)$$

This result is the lower bound of the generalized (h,ϕ) Jeffreys–Sharma–Mittal divergence.

Combining (20) and (25) we obtain the expected inequalities (16). □

Corollary 1.

When we substitute $\phi\!\left(\frac{p_i}{q_i},\alpha\right)=\left(\frac{p_i}{q_i}\right)^\alpha$, then from (16) we obtain the inequalities for the Jeffreys–Sharma–Mittal h-divergence:

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\left(\frac{p_i}{q_i}\right)^{\alpha(q_i-p_i)}\le JefSM_{\alpha,\beta}^{h}(p,q)\le\frac{\sum_{i=1}^{n} q_i\,h\!\left(\left(\frac{p_i}{q_i}\right)^\alpha\right)+\sum_{i=1}^{n} p_i\,h\!\left(\left(\frac{q_i}{p_i}\right)^\alpha\right)-2}{\beta-1}.$$
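These bounds can be checked numerically, for instance with the convex, increasing function $h(t)=t^2$, which satisfies $h(1)=1$; the sketch below (helper names and sample distributions are ours) prints True:

```python
import math

def sm_h(p, q, alpha, beta, h):
    # Sharma-Mittal h-divergence, Equation (5)
    s = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return (h(s ** ((1 - beta) / (1 - alpha))) - 1) / (beta - 1)

h = lambda t: t**2                          # convex, increasing on R_+, h(1) = 1
p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
alpha, beta = 2.0, 1.5                      # requires 1 < beta <= alpha

jef_h = sm_h(p, q, alpha, beta, h) + sm_h(q, p, alpha, beta, h)
lower = (alpha / (alpha - 1)) * sum((qi - pi) * math.log(pi / qi)
                                    for pi, qi in zip(p, q))
upper = (sum(qi * h((pi / qi)**alpha) + pi * h((qi / pi)**alpha)
             for pi, qi in zip(p, q)) - 2) / (beta - 1)
print(lower <= jef_h <= upper)              # True
```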

We now formulate a theorem that makes it possible to estimate the generalized $(h,\phi)$ Jensen–Sharma–Mittal divergence.

Theorem 2.

Let $p=(p_1,\dots,p_n)$ and $q=(q_1,\dots,q_n)$ be two discrete probability distributions with $p_i>0$, $q_i>0$, $\frac{q_i}{p_i}\in I_0$, $\frac{p_i}{q_i}\in I_0$, $i=1,\dots,n$, where $I_0\subseteq\mathbb{R}$ is an interval such that $1\in I_0$. Let $\phi:I_0\times\mathbb{R}_+\to\mathbb{R}_+$ be an increasing, non-negative and differentiable function for which $\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\ge1$ and $\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\ge1$, where $1<\beta\le\alpha$, $\alpha\in\mathbb{R}_+\setminus\{1\}$, and let $h:I_0\to\mathbb{R}$ be a convex and increasing function on $I_0$.

Then, the following inequalities are valid:

$$\frac{1}{2(\alpha-1)}\log\prod_{i=1}^{n}\left[\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right]^{\frac{p_i+q_i}{2}}\le JenSM_{\alpha,\beta}^{h,\phi}(p,q)$$
$$\le\frac{\sum_{i=1}^{n}(p_i+q_i)\left[(h\circ\phi)\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)+(h\circ\phi)\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right]-4}{4(\beta-1)}. \qquad (26)$$

Proof. 

Let us consider the function

$$\frac{h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}. \qquad (27)$$

Using the assumptions that the function $h$ is differentiable and convex with $h(1)=1$, we can formulate the following inequality:

$$\frac{h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}\ge\frac{\partial}{\partial\beta}\Bigg|_{\beta=1}h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right). \qquad (28)$$

Then, the right-hand side of (28) is equal to:

$$\log\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)\cdot\frac{-1}{1-\alpha}=\frac{1}{\alpha-1}\log\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right).$$

Taking into account the concavity of the function $\log$, we have that:

$$\frac{1}{\alpha-1}\log\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\ge\frac{1}{\alpha-1}\sum_{i=1}^{n}\frac{p_i+q_i}{2}\log\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right).$$

Then, we obtain that (27) is greater than or equal to

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)^{\frac{p_i+q_i}{2}}. \qquad (29)$$

We do the same with the function

$$\frac{h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}. \qquad (30)$$

Hence, we have that (30) is greater than or equal to

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)^{\frac{p_i+q_i}{2}}. \qquad (31)$$

Then, combining (27) and (29)–(31) and using definition (8), the following inequality holds:

$$JenSM_{\alpha,\beta}^{h,\phi}(p,q)\ge\frac{1}{2(\alpha-1)}\log\prod_{i=1}^{n}\left[\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right]^{\frac{p_i+q_i}{2}}, \qquad (32)$$

and it is the lower bound of the generalized (h,ϕ) Jensen–Sharma–Mittal divergence.

When we consider the function

$$h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right) \qquad (33)$$

with $1<\beta\le\alpha$, then for the convex and increasing function $h$ we have from (4) that (33) is not greater than

$$h\!\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)\le\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,(h\circ\phi)\!\left(\frac{2p_i}{p_i+q_i},\alpha\right). \qquad (34)$$

In a similar way, we conclude the following inequality for the function

$$h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right), \qquad (35)$$

and we have

$$h\!\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right)\le\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,(h\circ\phi)\!\left(\frac{2q_i}{p_i+q_i},\alpha\right). \qquad (36)$$

Then, combining (33)–(36) and definition (8) with the proper transformations, we obtain the inequality

$$JenSM_{\alpha,\beta}^{h,\phi}(p,q)\le\frac{\sum_{i=1}^{n}(p_i+q_i)\left[(h\circ\phi)\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)+(h\circ\phi)\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right]-4}{4(\beta-1)}, \qquad (37)$$

which is the upper bound of the generalized (h,ϕ) Jensen–Sharma–Mittal divergence.

When we take into account (32) and (37), then we obtain (26). □

Corollary 2.

When we substitute $\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)=\left(\frac{2p_i}{p_i+q_i}\right)^\alpha$ and $\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)=\left(\frac{2q_i}{p_i+q_i}\right)^\alpha$, then from (26) we obtain the inequalities for the Jensen–Sharma–Mittal h-divergence:

$$\frac{1}{2(\alpha-1)}\log\prod_{i=1}^{n}\left(\frac{4p_iq_i}{(p_i+q_i)^2}\right)^{\frac{\alpha(p_i+q_i)}{2}}\le JenSM_{\alpha,\beta}^{h}(p,q)$$
$$\le\frac{\sum_{i=1}^{n}(p_i+q_i)\left[h\!\left(\left(\frac{2p_i}{p_i+q_i}\right)^\alpha\right)+h\!\left(\left(\frac{2q_i}{p_i+q_i}\right)^\alpha\right)\right]-4}{4(\beta-1)}.$$
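An analogous numerical check of Corollary 2, again with the illustrative choice $h(t)=t^2$ and sample distributions of our own, prints True:

```python
import math

def sm_h(p, q, alpha, beta, h):
    # Sharma-Mittal h-divergence, Equation (5)
    s = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return (h(s ** ((1 - beta) / (1 - alpha))) - 1) / (beta - 1)

h = lambda t: t**2                           # convex, increasing on R_+, h(1) = 1
p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
alpha, beta = 2.0, 1.5
m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # midpoint distribution

jen_h = 0.5 * sm_h(p, m, alpha, beta, h) + 0.5 * sm_h(q, m, alpha, beta, h)
lower = (alpha / (4 * (alpha - 1))) * sum(
    (pi + qi) * math.log(4 * pi * qi / (pi + qi)**2) for pi, qi in zip(p, q))
upper = (sum((pi + qi) * (h((2 * pi / (pi + qi))**alpha) +
                          h((2 * qi / (pi + qi))**alpha))
             for pi, qi in zip(p, q)) - 4) / (4 * (beta - 1))
print(lower <= jen_h <= upper)               # True
```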

Remark 5.

It can be seen that the lower bounds (25) and (32) for the Jeffreys- and Jensen-type $(h,\phi)$ Sharma–Mittal divergences, respectively, are independent of the function $h$.

Remark 6.

Taking into account inequality (10), we obtain an alternative upper bound for the Jensen–Sharma–Mittal and a lower bound for the Jeffreys–Sharma–Mittal generalized $(h,\phi)$ divergences, respectively:

$$JenSM_{\alpha,\beta}^{h,\phi}(p,q)\le\frac{h\!\left(\left(\sum_{i=1}^{n} q_i\,\phi\!\left(\frac{p_i}{q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)+h\!\left(\left(\sum_{i=1}^{n} p_i\,\phi\!\left(\frac{q_i}{p_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-2}{2(\beta-1)},$$
$$JefSM_{\alpha,\beta}^{h,\phi}(p,q)\ge\frac{h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2p_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}+\frac{h\!\left(\left(\sum_{i=1}^{n}\frac{p_i+q_i}{2}\,\phi\!\left(\frac{2q_i}{p_i+q_i},\alpha\right)\right)^{\frac{1-\beta}{1-\alpha}}\right)-1}{\beta-1}.$$

4. Applications

In this section we show how our theory works.

4.1. Bounds for Sharma–Mittal Divergences

For the functions $h(t)=t$ and $\phi(t,\alpha)=t^\alpha$, based on Theorems 1 and 2, we obtain the lower and upper bounds for the Jeffreys–Sharma–Mittal and Jensen–Sharma–Mittal divergences, respectively, as follows:

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\left(\frac{p_i}{q_i}\right)^{\alpha(q_i-p_i)}\le JefSM_{\alpha,\beta}(p,q)\le\frac{\sum_{i=1}^{n}(p_iq_i)^\alpha\left(q_i^{1-2\alpha}+p_i^{1-2\alpha}\right)-2}{\beta-1}, \qquad (38)$$
$$\frac{1}{2(\alpha-1)}\log\prod_{i=1}^{n}\left(\frac{2\sqrt{p_iq_i}}{p_i+q_i}\right)^{\alpha(p_i+q_i)}\le JenSM_{\alpha,\beta}(p,q)\le\frac{\sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{1-\alpha}\left(p_i^\alpha+q_i^\alpha\right)-2}{2(\beta-1)}. \qquad (39)$$
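A direct numerical check of (38) and (39) (helper names and sample data are ours) prints True for both pairs of bounds:

```python
import math

def sharma_mittal(p, q, alpha, beta):        # Equation (3)
    s = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return (s ** ((1 - beta) / (1 - alpha)) - 1) / (beta - 1)

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
a, b = 2.0, 1.5                              # 1 < beta <= alpha
m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

jef = sharma_mittal(p, q, a, b) + sharma_mittal(q, p, a, b)
jen = 0.5 * sharma_mittal(p, m, a, b) + 0.5 * sharma_mittal(q, m, a, b)

jef_lo = (a / (a - 1)) * sum((qi - pi) * math.log(pi / qi) for pi, qi in zip(p, q))
jef_hi = (sum((pi * qi)**a * (qi**(1 - 2*a) + pi**(1 - 2*a))
              for pi, qi in zip(p, q)) - 2) / (b - 1)
jen_lo = (a / (4 * (a - 1))) * sum((pi + qi) * math.log(4 * pi * qi / (pi + qi)**2)
                                   for pi, qi in zip(p, q))
jen_hi = (sum(((pi + qi) / 2)**(1 - a) * (pi**a + qi**a)
              for pi, qi in zip(p, q)) - 2) / (2 * (b - 1))
print(jef_lo <= jef <= jef_hi, jen_lo <= jen <= jen_hi)   # True True
```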

Remark 7.

The above lower bounds in (38) and (39) are the same for the Rényi type divergences, because they are independent of the parameter $\beta$, which in that case approaches 1.

Remark 8.

Substituting different values of the parameters $\alpha$ and $\beta$ such that $1<\beta\le\alpha$, and taking into account the assumptions of Theorems 1 and 2 on the functions $h$ and $\phi$, we can formulate new types of divergences and related inequalities based on the generalized $(h,\phi)$ Sharma–Mittal divergence.

4.2. Bounds for Tsallis Divergences

When we make the same assumptions as for the Sharma–Mittal divergences, with the additional condition $\beta=\alpha$, we obtain the bounds for the Tsallis type divergences as follows:

$$\frac{1}{\alpha-1}\log\prod_{i=1}^{n}\left(\frac{p_i}{q_i}\right)^{\alpha(q_i-p_i)}\le JefT_\alpha(p,q)\le\frac{\sum_{i=1}^{n}(p_iq_i)^\alpha\left(q_i^{1-2\alpha}+p_i^{1-2\alpha}\right)-2}{\alpha-1},$$
$$\frac{1}{2(\alpha-1)}\log\prod_{i=1}^{n}\left(\frac{2\sqrt{p_iq_i}}{p_i+q_i}\right)^{\alpha(p_i+q_i)}\le JenT_\alpha(p,q)\le\frac{\sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{1-\alpha}\left(p_i^\alpha+q_i^\alpha\right)-2}{2(\alpha-1)}.$$

4.3. Bounds for Kullback–Leibler Divergences

In the same situation as in the case of the Tsallis divergence, that is, $h(t)=t$, $\phi(t,\alpha)=t^\alpha$, $\alpha=\beta$, and with both $\alpha$ and $\beta$ additionally approaching 1, we obtain new upper bounds for the Jeffreys and Jensen–Shannon divergences, respectively:

$$JefS(p,q)\le\sum_{i=1}^{n}(p_i+q_i)\log\frac{p_i}{q_i},$$
$$JenS(p,q)\le\sum_{i=1}^{n}\left[p_i\log p_i+q_i\log q_i-(p_i+q_i)\log(p_i+q_i)\right].$$

The last inequality is equivalent to $JenS(p,q)\le 2\log 2$.

5. Summary

In this paper, new types of entropy have been defined which generalize others known and used so far in information theory.

The manuscript deals mostly with issues in the field of pure mathematics; therefore, the standard axioms of entropy used in thermodynamics could, in this case, be extended by other assumptions and properties.

These divergences have been introduced so that new physical interpretations can be generated from them.

The generalized Sharma–Mittal and, consequently, the Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences have been defined in order to obtain better estimates for known entropies, which will allow a more accurate determination of the dispersion measure of different distributions.

The derived inequalities provide both upper and lower bounds for the considered f-divergences. As a consequence, we obtain specific estimates for some new order measures. Hence, they offer much wider interpretation possibilities when comparing probability distributions in the sense of mutual distances in different spaces.

In the era of advancing quantum mechanics, scientists are striving to build a quantum computer with very high computing power. The obtained results, despite their mathematical and analytical complexity, can very quickly generate specific numerical intervals estimating the newly introduced entropies. Therefore, results such as those in this paper will be very useful in developing information theory.

This work belongs to the area of pure mathematics and is therefore more theoretical than practical; it makes it possible to recover the existing known entropies by means of the newly defined generalizations. These generalizations can be used for interpreting various physical phenomena. The aim of this manuscript was to provide some new theoretical tools for physicists who, with their knowledge and experience, will be able to look for new applications.

Acknowledgments

The author wishes to thank anonymous referees for their helpful suggestions improving the readability of the paper.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Sharma B.D., Mittal D.P. New nonadditive measures of inaccuracy. J. Math. Sci. 1975;10:122–133.
2. Üzengi Aktürk O., Aktürk E., Tomak M. Can Sobolev inequality be written for Sharma–Mittal entropy? Int. J. Theor. Phys. 2008;47:3310–3320. doi: 10.1007/s10773-008-9766-2.
3. Naudts J. Generalised Thermostatistics. Springer; Berlin, Germany: 2011.
4. Elhoseiny M., Elgammal A. Generalized Twin Gaussian Processes using Sharma–Mittal Divergence. arXiv:1409.7480v5. 2015. doi: 10.1007/s10994-015-5497-9.
5. Koltcov S., Ignatenko V., Koltsova O. Estimating Topic Modeling Performance with Sharma–Mittal Entropy. Entropy. 2019;21:660. doi: 10.3390/e21070660.
6. Bo L., Sminchisescu C. Twin Gaussian processes for structured prediction. Int. J. Comput. Vis. 2010;87:28–52. doi: 10.1007/s11263-008-0204-y.
7. Masi M. A step beyond Tsallis and Rényi entropies. Phys. Lett. A. 2005;338:217–224. doi: 10.1016/j.physleta.2005.01.094.
8. Aktürk E., Bagci G., Sever R. Is Sharma–Mittal entropy really a step beyond Tsallis and Rényi entropies? arXiv:cond-mat/0703277. 2007.
9. Verma R., Merigó J.M. On Sharma–Mittal's Entropy under Intuitionistic Fuzzy Environment. Cybern. Syst. 2021;52:498–521. doi: 10.1080/01969722.2021.1903722.
10. Ghaffari S., Ziaie A.H., Moradpour H., Asghariyan F., Feleppa F., Tavayef M. Black hole thermodynamics in Sharma–Mittal generalized entropy formalism. Gen. Relativ. Gravit. 2019;51:93. doi: 10.1007/s10714-019-2578-2.
11. Demirel E.C.G. Dark energy model in higher-dimensional FRW universe with respect to generalized entropy of Sharma and Mittal of flat FRW space-time. Can. J. Phys. 2019;97:1185–1186. doi: 10.1139/cjp-2018-0784.
12. Frank T., Daffertshofer A. Exact time-dependent solutions of the Rényi Fokker–Planck equation and the Fokker–Planck equations related to the entropies proposed by Sharma and Mittal. Phys. A Stat. Mech. Appl. 2000;285:351–366. doi: 10.1016/S0378-4371(00)00178-3.
13. Kluza P.A., Niezgoda M. Inequalities for relative operator entropies. Electron. J. Linear Algebra. 2014;27:851–864. doi: 10.13001/1081-3810.2845.
14. Kluza P.A., Niezgoda M. Generalizations of Crooks and Lin's results on Jeffreys–Csiszár and Jensen–Csiszár f-divergences. Phys. A Stat. Mech. Appl. 2016;463:383–393. doi: 10.1016/j.physa.2016.07.062.
15. Kluza P.A., Niezgoda M. On Csiszár and Tsallis type f-divergences induced by superquadratic and convex functions. Math. Inequal. Appl. 2018;21:455–467.
16. Kluza P.A. On Jensen–Rényi and Jeffreys–Rényi type f-divergences induced by convex functions. Phys. A Stat. Mech. Appl. 2020;548:1–10. doi: 10.1016/j.physa.2019.122527.
17. Crooks G.E. On Measures of Entropy and Information. Available online: http://threeplusone.com/info (accessed on 22 September 2018).
18. Csiszár I. Information-type measures of differences of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967;2:299–318.
19. Csiszár I., Körner J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press; New York, NY, USA: 1981.
20. Dragomir S.S. Inequalities for the Csiszár f-Divergence in Information Theory. RGMIA Monographs, Victoria University. 2000. Available online: http://rgmia.org/monographs/csiszar.htm (accessed on 1 February 2001).
21. Baez J.C. Rényi entropy and free energy. arXiv:1102.2098. 2011. doi: 10.3390/e24050706.
22. Dragomir S.S., Pecaric J.E., Persson L.E. Properties of some functionals related to Jensen's inequality. Acta Math. Hungar. 1996;70:129–143. doi: 10.1007/BF00113918.
