. 2025 Dec 8;27(12):1243. doi: 10.3390/e27121243

Mirror Descent and Exponentiated Gradient Algorithms Using Trace-Form Entropies

Andrzej Cichocki 1,2,3,4,*, Toshihisa Tanaka 3,*, Frank Nielsen 5,*, Sergio Cruces 6,*
Editors: Antonio M Scarfone, Tatsuaki Wada
PMCID: PMC12731593  PMID: 41440446

Abstract

This paper introduces a broad class of Mirror Descent (MD) and Generalized Exponentiated Gradient (GEG) algorithms derived from trace-form entropies defined via deformed logarithms. Leveraging these generalized entropies yields MD and GEG algorithms with improved convergence behavior, robustness against vanishing and exploding gradients, and inherent adaptability to non-Euclidean geometries through mirror maps. We establish deep connections between these methods and Amari’s natural gradient, revealing a unified geometric foundation for additive, multiplicative, and natural gradient updates. Focusing on the Tsallis, Kaniadakis, Sharma–Taneja–Mittal, and Kaniadakis–Lissia–Scarfone entropy families, we show that each entropy induces a distinct Riemannian metric on the parameter space, leading to GEG algorithms that preserve the natural statistical geometry. The tunable parameters of deformed logarithms enable adaptive geometric selection, providing enhanced robustness and convergence over classical Euclidean optimization. Overall, our framework unifies key first-order MD optimization methods under a single information-geometric perspective based on generalized Bregman divergences, where the choice of entropy determines the underlying metric and dual geometric structure.

Keywords: mirror descent; natural gradient; information geometry; deformed logarithms; generalized exponentiated gradient; Bregman divergences; Riemannian optimization; (q,κ)-algebra


This paper is dedicated to Professor Shun-Ichi AMARI in honor of his 90th birthday.

1. Introduction

Mirror descent (MD), initially proposed by Nemirovsky and Yudin [1], has become an increasingly popular topic in optimization, artificial intelligence, and machine learning domains [2,3,4,5,6]. Its profound success stems not merely from its algorithmic efficiency, but also from deep mathematical connections to information geometry and the natural statistical structure underlying optimization problems. These connections, particularly to Amari’s Natural Gradient (NG) method, reveal that effective optimization is fundamentally about respecting the intrinsic geometry of the parameter space rather than imposing artificial Euclidean constraints [7,8,9].

The central motivation for our research emerges from a fundamental insight in information geometry: optimized learning algorithms should adapt to the Fisher information metric of the underlying statistical manifold. This principle, pioneered by Amari in the context of neural networks, establishes that the steepest descent direction on a statistical manifold is not the Euclidean gradient, but rather the natural gradient, the direction that accounts for the curvature induced by the Fisher information matrix [10,11].

1.1. The Information Geometry Perspective

The connection between Mirror Descent and Natural Gradient runs deeper than algorithmic similarity: it represents a fundamental mathematical equivalence that has been rigorously established [2,12,13]. Our work extends this principle by showing that trace-form entropies induce natural Fisher-like metrics that can be even more appropriate for specific problem structures. Through deformed logarithms, we can systematically explore the space of possible geometries and the possibility of discovering optimal choices of hyperparameters for given data distributions [14,15].

1.2. Challenge of Geometric Selection

While the power of geometric optimization is well established, a fundamental challenge remains: how to select the appropriate geometry for a given optimization problem? Classical approaches require domain expertise and manual tuning, limiting their applicability. Our approach addresses this through parameterized entropy families, where hyperparameters control the geometric structure [2].

The theoretical foundation of our approach rests on the connection between Bregman divergences and exponential families established in information geometry [8]. This connection indicates that choosing a generalized entropy is equivalent to selecting an appropriate exponential family structure for specific optimization problems. See also [16] for an extension of the logarithmic divergences extending to Bregman divergences. Our deformed logarithms allow us to systematically explore the space of possible exponential families, enabling discovery of optimal statistical models.

The Exponentiated Gradient (EG) and its extensions emerge as a specific and powerful instantiation of the Mirror Descent framework when the mirror map is constructed from generalized entropies and deformed logarithms. This connection is far from superficial—it represents a fundamental mathematical relationship that unifies additive and multiplicative gradient updates within a single theoretical framework [15,17,18,19,20,21].

It is important to note that Mirror Descent updates can be reparameterized as Gradient Descent in appropriately chosen coordinate systems [2,12]. This reveals that the computational complexity of Natural Gradient can be considerably reduced. Our approach adds insight by showing that deformed logarithms naturally induce reparameterizations that preserve geometric structure while enabling efficient computation, and that they provide implicit regularization through the choice of entropy or deformed logarithm.

1.3. Research Contributions and Scope

In this work, we systematically investigate the theoretical foundations and practical implications of employing trace-form entropies and deformed logarithms in the Mirror Descent framework.

Our primary contributions include:

  • Mathematical Framework: We establish a comprehensive mathematical foundation connecting generalized entropies, deformed logarithms, and Mirror Descent updates, providing explicit formulations for numerous well-established trace entropy families.

  • Algorithmic Innovations: We derive novel Generalized Exponentiated Gradient (GEG) algorithms with generalized multiplicative updates that leverage the flexibility of hyperparameter-controlled deformed logarithms, enabling adaptation to problem geometry.

The significance of this research extends beyond algorithmic development: it opens new avenues for understanding the geometric foundations of optimization and provides practical tools for addressing increasingly complex machine learning challenges. The unifying theoretical framework connects optimization theory, information geometry, statistical physics, and practical machine learning; opens up new research directions; and provides principled approaches to algorithm design that respect the natural geometric structure of optimization problems.

2. Preliminaries: Mirror Descent (MD) and Standard Exponentiated Gradient (EG) Updates

Notations: Vectors are denoted by boldface lowercase letters, e.g., $\mathbf{w} \in \mathbb{R}^N$, where for any vector $\mathbf{w}$ we denote its $i$-th entry by $w_i$. For any vectors $\mathbf{w}, \mathbf{v} \in \mathbb{R}^N$, we define the Hadamard product as $\mathbf{w} \odot \mathbf{v} = [w_1 v_1, \ldots, w_N v_N]^T$ and $\mathbf{w}^{\odot \alpha} = [w_1^\alpha, \ldots, w_N^\alpha]^T$. All operations on vectors, such as multiplications and additions, are performed componentwise. A scalar function applied to a vector acts on each entry, e.g., $f(\mathbf{w}) = [f(w_1), f(w_2), \ldots, f(w_N)]^T$. The set of $N$-dimensional real vectors with nonnegative entries is denoted by $\mathbb{R}^N_+$. We let $\mathbf{w}(t)$ denote the weight or parameter vector as a function of time $t$. The learning process advances in iterative steps: during step $t$ we start with the weight vector $\mathbf{w}(t) = \mathbf{w}_t$ and update it to a new vector $\mathbf{w}(t+1) = \mathbf{w}_{t+1}$. We define $[x]_+ = \max\{0, x\}$, and the gradient of a differentiable cost function as $\nabla_{\mathbf{w}} L(\mathbf{w}) = \partial L(\mathbf{w})/\partial \mathbf{w} = [\partial L(\mathbf{w})/\partial w_1, \ldots, \partial L(\mathbf{w})/\partial w_N]^T$. In contrast to the deformed logarithms defined later, the classical natural logarithm is denoted by $\ln(x)$.

2.1. Problem Statement

We consider the constrained optimization problem:

$\mathbf{w}_{t+1} = \arg\min_{\mathbf{w} \in \mathbb{R}^N_+} \left\{ L(\mathbf{w}) + \dfrac{1}{\eta} D_F(\mathbf{w} \,\|\, \mathbf{w}_t) \right\}$, (1)

where $L(\mathbf{w})$ is a continuously differentiable loss function, $\eta > 0$ is the learning rate, and $D_F(\mathbf{w}\|\mathbf{w}_t)$ is the Bregman divergence induced by a strictly convex generating function $F(\mathbf{w})$ (used here as a regularizer) [2,22]. The Bregman divergence provides the geometric foundation for Mirror Descent algorithms:

$D_F(\mathbf{w} \,\|\, \mathbf{w}_t) = F(\mathbf{w}) - F(\mathbf{w}_t) - (\mathbf{w} - \mathbf{w}_t)^T f(\mathbf{w}_t)$, (2)

where the generating (or potential) function $F(\mathbf{w})$ is a continuously differentiable, strictly convex function defined on a convex domain, while $f(\mathbf{w}) = \nabla_{\mathbf{w}} F(\mathbf{w})$ is the mirror map, also called the link function, which is a strictly monotonically increasing function. For fundamental properties and for some extensions, see, e.g., [23,24,25,26].

The Bregman divergence measures the difference between $F(\mathbf{w})$ and its first-order Taylor approximation around $\mathbf{w}_t$, providing a natural measure of geometric proximity that respects the curvature induced by $F$. This geometric structure is intimately connected to information geometry: different choices of $F$ correspond to different Riemannian metrics on the parameter manifold. The Bregman divergence $D_F(\mathbf{w}\|\mathbf{w}_t)$ arising from the generating (potential) function $F(\mathbf{w})$ can be viewed as a measure of curvature. The Bregman divergence includes many well-known divergences commonly used in practice, namely, the squared Euclidean distance, Kullback–Leibler divergence (relative entropy), Itakura–Saito distance, beta divergence, and many more [10,14,27,28,29].

2.2. Mirror Descent Update Rules and Geometric Interpretation

Setting the gradient of the objective in Equation (1) to zero yields the implicit update:

$f(\mathbf{w}_{t+1}) = f(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_{t+1})$, (3)

or equivalently

$\mathbf{w}_{t+1} = f^{-1}\big(f(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_{t+1})\big)$, (4)

where $f^{-1}$ is the inverse of the link function. Note that when $F$ is separable and continuous, the inverse function $f^{-1}$ is defined globally (by the inverse function theorem). In general, the implicit function theorem only guarantees local inversion of multivariate functions, not the existence of a global inverse. However, when $F$ is a multivariate convex function of Legendre type, so is its convex conjugate $F^*$, and their gradients are globally inverse to each other: $\nabla F = (\nabla F^*)^{-1}$ and $\nabla F^* = (\nabla F)^{-1}$.

Assuming that $\nabla_{\mathbf{w}} L(\mathbf{w}_{t+1}) \approx \nabla_{\mathbf{w}} L(\mathbf{w}_t)$, we obtain the explicit Mirror Descent update [2,5]:

$\mathbf{w}_{t+1} = f^{-1}\big(f(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$ (5)
$\phantom{\mathbf{w}_{t+1}} = (\nabla F)^{-1}\big(\nabla F(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$. (6)

In MD, we map the primal point $\mathbf{w}$ to the dual space via the link function $f(\mathbf{w}) = \nabla F(\mathbf{w})$, take a step in the direction given by the gradient of the loss, and then map back to the primal space using the inverse of the link function. The advantage of Mirror Descent (MD) over Gradient Descent is that it takes into account the geometry of the problem through a suitable choice of link function.

Dual Space Interpretation: Mirror Descent operates by

  1. Mapping to dual space: $\Theta = f(\mathbf{w}) = \nabla F(\mathbf{w})$,

  2. Taking a gradient step: $\Theta_{t+1} = \Theta_t - \eta \nabla L(\mathbf{w}_t)$,

  3. Mapping back to primal: $\mathbf{w}_{t+1} = f^{-1}(\Theta_{t+1})$.

This three-step process naturally incorporates problem geometry through the choice of link function f (Figure 1).

Figure 1. Mirror descent as a three-step process: (1) map the parameter with the link function to the dual space (mirror space), (2) perform gradient descent in the dual space, and (3) map back to the primal parameter space using the inverse of the link function.

For example, consider $F(\mathbf{w}) = \sum_i w_i \log w_i$, the Shannon negative entropy. The link function is $f(\mathbf{w}) = \nabla F(\mathbf{w}) = [1 + \log w_i]_i$ with inverse map $f^{-1}(\Theta) = \left[ e^{\Theta_i} / \sum_j e^{\Theta_j} \right]_i$. The corresponding mirror update is the exponentiated gradient update:

$\mathbf{w}_{t+1} = \mathbf{w}_t \odot \exp\big(-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$.

This is a standard and useful algorithm for optimization on the probability simplex that is recovered as the mirror descent with respect to the Kullback–Leibler (KL) divergence (a Bregman divergence). The underlying geometric structure is the KL Hessian geometry, an example of dually flat space in information geometry.
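A minimal numerical sketch of this update (the quadratic loss, target distribution, and step size below are our illustrative choices, not from the paper) runs the normalized exponentiated gradient on the probability simplex:

```python
import numpy as np

def eg_simplex_step(w, grad, eta):
    """One exponentiated gradient (mirror descent) step on the probability simplex."""
    v = w * np.exp(-eta * grad)  # multiplicative update in the primal space
    return v / v.sum()           # renormalize: inverse mirror map back to the simplex

# Illustrative problem: minimize L(w) = 0.5 * ||w - p||^2 over the simplex,
# whose constrained minimizer is the interior point p itself.
p = np.array([0.5, 0.3, 0.2])
w = np.full(3, 1.0 / 3.0)
for _ in range(2000):
    w = eg_simplex_step(w, w - p, eta=0.5)
```

The iterate stays on the simplex at every step, which is the point of using the KL geometry rather than a Euclidean projection.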

Note that when the generating function $F$ is separable across its coordinates (i.e., $F(\mathbf{w}) = \sum_i F(w_i)$), the Hessian matrix $\nabla^2 F(\mathbf{w})$ is diagonal.

2.3. Continuous-Time Formulation and Natural Gradient Connection

The continuous-time limit (as Δt0) yields the mirror flow ODE:

$\dfrac{d\, f(\mathbf{w}(t))}{dt} = -\mu \nabla_{\mathbf{w}} L(\mathbf{w}(t))$, (7)

where $\mu = \eta/\Delta t > 0$ is the learning rate for continuous-time learning, and $f(\mathbf{w}) = \nabla F(\mathbf{w})$ is a suitably chosen link function [2]. Using the chain rule, we can write the mirror flow as follows

$\dfrac{d f(\mathbf{w})}{dt} = \dfrac{d f(\mathbf{w})}{d \mathbf{w}}\,\dfrac{d\mathbf{w}}{dt} = \mathrm{diag}\!\left(\dfrac{d f(\mathbf{w})}{d \mathbf{w}}\right)\dfrac{d\mathbf{w}}{dt} = -\mu \nabla_{\mathbf{w}} L(\mathbf{w}(t))$. (8)

Hence, we obtain the continuous-time MD update in an alternative form:

$\dfrac{d\mathbf{w}}{dt} = -\mu\, \mathrm{diag}\!\left(\dfrac{d f(\mathbf{w})}{d \mathbf{w}}\right)^{-1} \nabla_{\mathbf{w}} L(\mathbf{w}) = -\mu\, \big[\nabla^2 F(\mathbf{w})\big]^{-1} \nabla_{\mathbf{w}} L(\mathbf{w}(t))$. (9)

This reveals that Mirror Descent in continuous time is equivalent to Natural Gradient descent with the Riemannian structure induced by $\mathbf{H}_F(\mathbf{w}) = [\nabla^2 F(\mathbf{w})]^{-1}$. This connection, established rigorously in [2,13], shows that geometric optimization methods are fundamentally unified.

2.4. Discrete Natural Gradient Form

The discrete version becomes

$\mathbf{w}_{t+1} = \left[\mathbf{w}_t - \eta\, \mathrm{diag}\!\left(\dfrac{d f(\mathbf{w}_t)}{d \mathbf{w}_t}\right)^{-1} \nabla_{\mathbf{w}} L(\mathbf{w}_t)\right]_+$, (10)

where $\mathrm{diag}\!\left(\dfrac{d f(\mathbf{w})}{d \mathbf{w}}\right)^{-1} = \mathrm{diag}\!\left(\left(\dfrac{d f(w_1)}{d w_1}\right)^{-1}, \ldots, \left(\dfrac{d f(w_N)}{d w_N}\right)^{-1}\right)$. We term the update (10) Mirror-less Mirror Descent (MMD); it represents a first-order approximation to second-order Natural Gradient methods [30].

It should be noted that the above diagonal matrix can be regarded as the inverse of the Hessian matrix, provided the latter exists and has positive diagonal entries for the specific set of parameters. MMD is a special form of Natural Gradient Descent (NGD) [2,7,13].
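A minimal sketch of the MMD step (illustrative code; the vectors and step size are our choices): with the link function $f(w) = \ln(w)$ the rescaling $\mathrm{diag}[df/dw]^{-1}$ equals $\mathrm{diag}(\mathbf{w})$, so a single MMD step agrees with the EGU update $\mathbf{w} \odot \exp(-\eta \nabla L)$ to first order in $\eta$:

```python
import numpy as np

def mmd_step(w, grad, f_prime, eta):
    """Mirror-less Mirror Descent (MMD): additive step rescaled by diag[f'(w)]^(-1)."""
    step = w - eta * grad / f_prime(w)
    return np.maximum(step, 0.0)  # [.]_+ keeps iterates nonnegative

w = np.array([0.2, 1.5, 0.7])
g = np.array([0.3, -0.1, 0.05])
eta = 0.01

mmd = mmd_step(w, g, lambda x: 1.0 / x, eta)  # f(w) = ln(w)  =>  f'(w) = 1/w
egu = w * np.exp(-eta * g)                    # exponentiated gradient update
```

For small `eta` the two updates differ only at second order, which is the sense in which MMD approximates the mirror (natural gradient) step.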

F. Nielsen provided a geometric interpretation of NG and its connections with the Riemannian gradient, the mirror descent, and the ordinary additive gradient descent [31].

2.5. Canonical Examples and Some Geometric Insights

Case 1: For $F(\mathbf{w}) = \|\mathbf{w}\|_2^2/2 = \frac{1}{2}\sum_{i=1}^N w_i^2$ and link function $f(\mathbf{w}) = \nabla_{\mathbf{w}} F(\mathbf{w}) = \mathbf{w}$, we obtain the standard (additive) gradient descent

$\dfrac{d\mathbf{w}(t)}{dt} = -\mu_t \nabla_{\mathbf{w}} L(\mathbf{w}(t))$ (11)

and its time-discrete approximate version

$\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t \nabla_{\mathbf{w}} L(\mathbf{w}_t)$. (12)

Case 2: For $F(\mathbf{w}) = \sum_{i=1}^N \big(w_i \ln(w_i) - w_i\big)$ and the corresponding (componentwise) link function $f(\mathbf{w}) = \ln(\mathbf{w})$, we obtain the (multiplicative) Exponentiated Gradient (EG) update, also called the unnormalized EG update (EGU) [20]:

$\dfrac{d \ln \mathbf{w}(t)}{dt} = -\mu \nabla_{\mathbf{w}} L(\mathbf{w}(t)), \qquad \mathbf{w}(t) > 0 \ \ \forall t$. (13)

In this sense, the unnormalized exponentiated gradient update (EGU) corresponds to the discrete-time version of the continuous ODE, obtained via Euler’s rule:

$\mathbf{w}_{t+1} = \exp\big(\ln(\mathbf{w}_t) - \mu \Delta t\, \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big) = \mathbf{w}_t \odot \exp\big(-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$,

where $\odot$ and $\exp$ denote componentwise multiplication and componentwise exponentiation, respectively, and $\eta = \mu \Delta t > 0$ is the learning rate for discrete-time updates. This multiplicative update naturally preserves positivity constraints and corresponds to the natural geometry of the probability simplex.
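The EGU update can be exercised on a toy separable problem (illustrative code; the target and step size are our choices): because the update is multiplicative, the iterates remain strictly positive throughout.

```python
import numpy as np

def egu_step(w, grad, eta):
    """Unnormalized exponentiated gradient (EGU): multiplicative, positivity-preserving."""
    return w * np.exp(-eta * grad)

# Toy problem: minimize L(w) = 0.5 * ||w - target||^2 over positive w.
target = np.array([0.4, 2.0, 0.01])
w = np.ones(3)
for _ in range(5000):
    w = egu_step(w, w - target, eta=0.2)
```

Note how the component with the tiny target 0.01 is reached without ever crossing zero, which a plain additive step with the same rate could violate.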

2.6. Motivation for Using Parameterized Deformed Logarithms

Traditional Mirror Descent methods suffer from geometric rigidity—the fixed choice of mirror map f cannot adapt to diverse problem structures or data distributions. This limitation motivates our investigation of parameterized mirror maps based on trace-form entropies.

Adaptive Geometric Framework: Our approach addresses this fundamental limitation by introducing hyperparameter-controlled mirror maps $f_{\Theta}(\mathbf{w})$ that can:

  • Adapt to statistical properties of training distributions.

  • Interpolate between different geometries (e.g., Euclidean, exponential family, power-law).

  • Provide automatic regularization through geometric bias.

  • Enable systematic geometry exploration rather than ad-hoc selection.

Information-Theoretic Foundation: The connection between exponential families and Bregman divergences suggests that optimal mirror maps should reflect the underlying statistical structure of optimization problems. In fact, trace-form entropies provide systematic frameworks for discovering these optimal geometric structures.

There are many potential choices of mirror map f(w) that can model the geometry of various optimization problems and adapt to the distribution of training data. In high dimensions (large-scale optimization), it can be advantageous to abandon the Euclidean geometry to improve convergence rates and performance. Using mirror descent with an appropriately chosen function we can obtain a considerable improvement.

3. Why Trace Entropies and Deformed Logarithms in MD and GEG?

Entropy measures provide natural regularization mechanisms and geometric structures for optimization algorithms. The connection between entropies, information theory, and geometry runs deep; each entropy functional induces a deformed logarithm and a unique Riemannian manifold structure through its associated Fisher information metric [32,33,34,35].

Trace entropies are functionals expressible in the explicit summation form [36,37,38,39,40,41]

$S(p) = \sum_i p_i\, f(1/p_i)$, (14)

where pi are probability values and f(·) is a suitable monotonically increasing function. The term “trace” refers to their mathematical structure, which resembles the trace operation for matrices, i.e., a direct summation over individual components.

Trace entropies are intimately connected to deformed logarithms through $f(x) = \log_D(x)$, where $\log_D$ represents a deformed logarithm function with specific mathematical properties ensuring proper entropic behavior [34,38].

A function $\log_D(x)$ qualifies as a deformed logarithm if it satisfies the following conditions:

  • Domain: $\log_D : \mathbb{R}_+ \to \mathbb{R}$

  • Strict monotonic increase: $\dfrac{d \log_D(x)}{dx} > 0$

  • Concavity (optional): $\dfrac{d^2 \log_D(x)}{dx^2} < 0$

  • Scaling and normalization: $\log_D(1) = 0$, $\left.\dfrac{d \log_D(x)}{dx}\right|_{x=1} = 1$

  • Duality: $\log_D(1/x) = -\widetilde{\log}_D(x)$.

These axioms ensure that deformed logarithms generate well-behaved entropy functionals while providing sufficient mathematical flexibility for geometric adaptation. The concavity requirement ensures that resulting entropies satisfy the maximum entropy principle, while duality guarantees symmetric treatment of probabilities and their reciprocals, which are essential for consistent statistical interpretation.

Remark 1. 

It should be noted that, since the generating (potential) function $F(\mathbf{w})$ (which is the integral of the link function $f(\mathbf{w})$) must be strictly convex, it is sufficient that the link function $f(\mathbf{w}) = \log_D(\mathbf{w})$ is a strictly monotonically increasing function; that is, its first derivative must be positive, while negativity of its second derivative is in general not necessary.

It should be noted that deformed logarithms and their corresponding deformed exponential functions can be flexibly tuned by one or more hyperparameters, whose optimization enables the adaptation to specific data distributions and problem geometries. By tuning/learning these hyperparameters, we adapt to the distribution of training data and/or we can adjust them to achieve desired properties of gradient descent algorithms.

It is of great importance to understand the mathematical structure of the generalized logarithms and their inverses, the generalized exponentials, in order to gain more insight into the proposed MD and EG update schemes. Motivated by this fact, and to make this paper more self-contained, we systematically review the fundamental properties of the deformed logarithms and their inverse generalized exponentials, and investigate the links between them.

We provide the basics of q-algebra and κ-algebra, together with the associated calculus, in Appendix A and Appendix A.1 [42].

4. MD and GEG Updates Using the Tsallis Entropy and Its Extensions

4.1. Properties of the Tsallis q-Logarithm and q-Exponential

In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy [34,43,44]. It is proportional to the expectation of the deformed q-logarithm (referred to here as the Tsallis q-logarithm) of a distribution.

The Tsallis q-logarithm is defined for x>0 as [45]

$\log_q^T(x) = \begin{cases} \dfrac{x^{1-q}-1}{1-q} & \text{for } x>0,\ q \neq 1, \\ \ln(x) & \text{for } x>0,\ q = 1, \\ -\dfrac{1}{1-q} & \text{for } x = 0,\ q < 1. \end{cases}$ (15)

The inverse function of the Tsallis q-logarithm is the deformed q-exponential function $\exp_q^T(x)$, defined as follows [45]:

$\exp_q^T(x) = \begin{cases} [1+(1-q)x]_+^{1/(1-q)} & \text{for } x \in (-1/(1-q), +\infty) \text{ if } q<1, \ \ x \in (-\infty, 1/(q-1)) \text{ if } q>1, \\ \exp(x) & \text{for } q = 1. \end{cases}$ (16)

It is easy to check that these functions satisfy the following relationships:

$\log_q^T\big(\exp_q^T(x)\big) = x \quad \big(0 < \exp_q^T(x) < \infty\big)$, (17)
$\exp_q^T\big(\log_q^T(x)\big) = x \quad \text{for } x > 0$. (18)
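These definitions and the inverse relations (17)–(18) can be checked numerically (illustrative code; function names are ours):

```python
import numpy as np

def log_q(x, q):
    """Tsallis q-logarithm (15); reduces to ln(x) for q = 1."""
    if q == 1:
        return np.log(x)
    return (x ** (1 - q) - 1) / (1 - q)

def exp_q(x, q):
    """Tsallis q-exponential (16), the inverse of log_q on its domain."""
    if q == 1:
        return np.exp(x)
    return np.maximum(1 + (1 - q) * x, 0.0) ** (1 / (1 - q))

x = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
roundtrip = exp_q(log_q(x, q=0.7), q=0.7)   # should recover x
classical = log_q(x, q=1)                   # plain natural logarithm
```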

Remark 2. 

The q-deformed exponential and logarithmic functions were introduced in Tsallis statistical physics in 1994 [44]. However, the q-deformation is related to the Box–Cox transformation (for $q = 1-\lambda$), which was proposed in 1964 [46].

The plots of the q-logarithm and q-exponential functions for various values of q are illustrated in Figure 2 and Figure 3.

Figure 2. Plots of the q-logarithm $\log_q^T(x)$ and q-exponential $\exp_q^T(x)$ functions for different values of the parameter q. One can observe how q controls the degree of concavity/convexity of the q-logarithm, as well as the degree of convexity/concavity of the q-exponential: the q-logarithm is convex for $q<0$, linear for $q=0$, and strictly concave for $q>0$, reducing to the classical logarithm for $q=1$.

Figure 3. A 3D plot of the q-logarithm $\log_q^T(x)$. The black continuous line shows the classical logarithm $\ln(x)$ as a reference, obtained for $q=1$.

It should be noted that q-functions can be approximated by power series as follows:

$\log_q^T(x) \approx \ln(x) + \tfrac{1}{2}(1-q)\,(\ln(x))^2 + \tfrac{1}{6}(1-q)^2 (\ln(x))^3 + \cdots$, (19)

and

$\exp_q^T(x) \approx 1 + x + \tfrac{1}{2}\, q\, x^2 + \tfrac{1}{6}\big(2q^2-q\big) x^3 + \cdots$ (20)
$\phantom{\exp_q^T(x)} = \exp(x) + \tfrac{1}{2}(q-1)\, x^2 + \tfrac{1}{6}\big(2q^2-q-1\big) x^3 + O(x^4)$. (21)

These functions have the following basic properties [44,47,48,49]:

$\log_q^T(x) = -\log_{2-q}^T(1/x)$ (22)
$\dfrac{\partial \log_q^T(x)}{\partial x} = x^{-q} > 0$ (23)
$\dfrac{\partial^2 \log_q^T(x)}{\partial x^2} = -q\, x^{-(q+1)} < 0 \quad \text{for } q > 0$. (24)

It is easy to prove the following fundamental properties:

$\log_q^T(xy) = \log_q^T(x) + \log_q^T(y) + (1-q)\log_q^T(x)\log_q^T(y) \quad \text{if } x>0,\ y>0$ (25)
$\exp_q^T(x)\,\exp_q^T(y) = \exp_q^T\big(x + y + (1-q)xy\big)$. (26)

Using these properties, we can define the nonlinear generalized algebraic operations, the q-sum and the q-product (for more details on q-algebra, see [47,50]):

$x \oplus_q^T y = x + y + (1-q)xy, \qquad (x \oplus_1^T y = x+y)$, (27)
$x \otimes_q^T y = \big[x^{1-q} + y^{1-q} - 1\big]_+^{1/(1-q)} \ \ \text{if } x>0,\ y>0, \qquad (x \otimes_1^T y = xy)$. (28)

Using this notation, and the definitions of the q-exponential function (16) and the q-logarithm (15), we can write the following formulas

$\exp_q^T(x+y) = \exp_q^T(x) \otimes_q \exp_q^T(y), \quad \text{for } 1+(1-q)x>0,\ 1+(1-q)y>0,\ 1+(1-q)(x+y)>0$, (29)
$\exp_q^T\big(\log_q^T(x)+y\big) = x \otimes_q \exp_q^T(y), \quad \text{for } x>0,\ 1+(1-q)y>0,\ x^{1-q}+(1-q)y>0$, (30)
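The q-algebra identities (26) and (29) can be verified numerically on admissible arguments (illustrative code; names are ours):

```python
import numpy as np

def exp_q(x, q):
    return np.exp(x) if q == 1 else np.maximum(1 + (1 - q) * x, 0.0) ** (1 / (1 - q))

def q_sum(x, y, q):
    """q-addition (27): exp_q(x) * exp_q(y) = exp_q(x (+)_q y)."""
    return x + y + (1 - q) * x * y

def q_prod(x, y, q):
    """q-product (28): exp_q(x + y) = exp_q(x) (x)_q exp_q(y) for x, y > 0."""
    return np.maximum(x ** (1 - q) + y ** (1 - q) - 1, 0.0) ** (1 / (1 - q))

q, x, y = 0.6, 0.8, 0.3   # arguments chosen inside the admissible domain
lhs_sum = exp_q(x, q) * exp_q(y, q)
rhs_sum = exp_q(q_sum(x, y, q), q)

lhs_prod = exp_q(x + y, q)
rhs_prod = q_prod(exp_q(x, q), exp_q(y, q), q)
```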

which play a key role in this paper.

4.2. MD and GEG Updates Using the Tsallis q-Logarithm

Let us assume that the link function in Mirror Descent can take the following componentwise form

$f_q(\mathbf{w}) = \log_q^T(\mathbf{w}), \qquad \mathbf{w} = [w_1,\ldots,w_N]^T \in \mathbb{R}^N_+$. (31)

In this case the generating function is $F(\mathbf{w}) = \sum_i \big(w_i \log_q^T(w_i) - \log_{q-1}^T(w_i)\big)$ and the Bregman divergence is the well-known beta divergence [28]:

$D_{F_q}(\mathbf{w}_{t+1}\|\mathbf{w}_t) = \sum_{i=1}^N \Big[ w_{i,t+1}\big(\log_q(w_{i,t+1}) - \log_q(w_{i,t})\big) - \log_{q-1}(w_{i,t+1}) + \log_{q-1}(w_{i,t}) \Big] = \sum_{i=1}^N \left[ w_{i,t+1}\,\dfrac{w_{i,t+1}^{1-q} - w_{i,t}^{1-q}}{1-q} - \dfrac{w_{i,t+1}^{2-q} - w_{i,t}^{2-q}}{2-q} \right]$, (32)

where $\beta = 1-q$, $\beta \neq 0$.

Applying Equation (6) and taking into account Formula (31), we obtain a novel generalized exponentiated gradient update, referred to as the q-GEG or q-MD update

$\mathbf{w}_{t+1} = \exp_q^T\big(\log_q^T(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big) = \mathbf{w}_t \otimes_q \exp_q^T\big(-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$, (33)

where the q-product $\otimes_q$ defined by Equation (28) is performed componentwise.

The above q-MD update can be written in a scalar (componentwise) form as

$w_{i,t+1} = w_{i,t} \otimes_q \exp_q^T\big(-\eta \nabla_{w_i} L(\mathbf{w}_t)\big) = \left[ w_{i,t}^{1-q} + \Big(\exp_q^T\big(-\eta \nabla_{w_i} L(\mathbf{w}_t)\big)\Big)^{1-q} - 1 \right]_+^{1/(1-q)}$. (34)

By applying the property (26) and substituting $y \to y/(1+(1-q)x)$, we can obtain the following identity

$\exp_q^T(x+y) = \exp_q^T(x)\,\exp_q^T\!\left(\dfrac{y}{1+(1-q)x}\right) \quad \text{for } x, y \in (-1/(1-q), +\infty) \text{ if } q<1, \ \ x, y \in (-\infty, 1/(q-1)) \text{ if } q>1$. (35)

Hence, we obtain a simplified generalized q-GEG update

$w_{i,t+1} = w_{i,t}\, \exp_q^T\!\left(-\eta\, \dfrac{\nabla_{w_i} L(\mathbf{w}_t)}{w_{i,t}^{1-q}}\right)$, (36)

which can be written in a compact vector form

$\mathbf{w}_{t+1} = \mathbf{w}_t \odot \exp_q^T\big(-\boldsymbol{\eta}_t \odot \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$, (37)

where the vector of learning rates

$\boldsymbol{\eta}_t = [\eta_{1,t},\ldots,\eta_{N,t}]^T$

has entries $\eta_{i,t} = \eta/\big(1+(1-q)\log_q^T(w_{i,t})\big) = \eta\, w_{i,t}^{q-1}$.
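The equivalence between the q-product form (33) and the rescaled form (37) can be checked numerically (illustrative code; names and values are ours):

```python
import numpy as np

def log_q(x, q):
    return (x ** (1 - q) - 1) / (1 - q)

def exp_q(x, q):
    return np.maximum(1 + (1 - q) * x, 0.0) ** (1 / (1 - q))

def q_geg_step(w, grad, eta, q):
    """q-GEG update (33): w_{t+1} = exp_q(log_q(w_t) - eta * grad)."""
    return exp_q(log_q(w, q) - eta * grad, q)

def q_geg_step_scaled(w, grad, eta, q):
    """Equivalent form (37): multiplicative update with rates eta_i = eta * w_i^(q-1)."""
    return w * exp_q(-(eta * w ** (q - 1)) * grad, q)

w = np.array([0.5, 1.2, 2.0])
g = np.array([0.3, -0.2, 0.1])
a = q_geg_step(w, g, eta=0.05, q=0.7)
b = q_geg_step_scaled(w, g, eta=0.05, q=0.7)
```

Both functions implement the same update; the second makes explicit that the q-GEG step behaves like EGU with per-coordinate, state-dependent learning rates.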

Remark 3. 

Assuming that the learning rate is time-varying and represented by a vector, i.e., $\eta \to \eta\, \mathbf{w}_t^{1-\alpha}$, with $\boldsymbol{\eta}_t = \eta\, \mathbf{w}_t^{1-\alpha-\beta}$, the proposed MD update takes the particular form derived and extensively tested experimentally in our recent publication [15], although there it was obtained by a different approach, employing alpha-beta divergences [14,27].

4.3. MD and EG Using Schwämmle–Tsallis (ST) Entropy

Schwämmle and Tsallis proposed the two-parameter entropy [51]

$S_{q,q'}^{ST}(p) = \sum_{i=1}^{W} p_i \log_{q,q'}^{ST}(1/p_i), \qquad q \neq 1,\ q' \neq 1$, (38)

where the deformed logarithm, referred to as the ST-logarithm or ST $(q,q')$-logarithm, is defined as

$\log_{q,q'}^{ST}(x) = \log_{q'}^T\big([x]_q\big) = \log_{q'}^T\Big(e^{\log_q^T(x)}\Big) = \dfrac{1}{1-q'}\left[\exp\!\left(\dfrac{1-q'}{1-q}\big(x^{1-q}-1\big)\right) - 1\right]$ (39)

for $x>0$ and $q \neq q'$ (typically $q>1$ and $q'<1$), where $[x]_q = \exp(\log_q^T(x))$, and its inverse function is formulated as a two-parameter deformed exponential $\exp_{q,q'}^{ST}(x)$:

$\exp_{q,q'}^{ST}(x) = \left[1 + \dfrac{1-q}{1-q'}\,\ln\big(1+(1-q')x\big)\right]^{1/(1-q)}$. (40)

Note that if either parameter q or q′, or both, takes the value one, the above functions simplify to the Tsallis q-functions (logarithm and exponential), so we can write

$\log_{q,1}^{ST}(x) = \log_{1,q}^{ST}(x) = \log_q^T(x), \qquad \log_{1,1}^{ST}(x) = \ln(x)$ (41)
$\exp_{q,1}^{ST}(x) = \exp_{1,q}^{ST}(x) = \exp_q^T(x), \qquad \exp_{1,1}^{ST}(x) = \exp(x)$. (42)

In the special case $q = q'$, we obtain

$\log_{q,q}^{ST}(x) = \dfrac{1}{1-q}\left[\exp\big(x^{1-q}-1\big) - 1\right] = \dfrac{1}{1-q}\left[\exp\big((1-q)\log_q^T(x)\big) - 1\right], \quad x>0,\ q>0$ (43)

and

$\exp_{q,q}^{ST}(x) = \Big[\ln\big(x(1-q)+1\big) + 1\Big]^{1/(1-q)}$. (44)
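A quick numerical check of the inverse pair (39)–(40) and of the reduction (41) as $q' \to 1$ (illustrative code; names and parameter values are ours):

```python
import numpy as np

def log_st(x, q, qp):
    """Schwämmle-Tsallis (q, q')-logarithm (39)."""
    lq = (x ** (1 - q) - 1) / (1 - q)              # Tsallis log_q(x)
    return (np.exp((1 - qp) * lq) - 1) / (1 - qp)

def exp_st(x, q, qp):
    """Two-parameter deformed exponential (40), inverse of log_st."""
    return (1 + (1 - q) / (1 - qp) * np.log(1 + (1 - qp) * x)) ** (1 / (1 - q))

x = np.array([0.3, 0.7, 1.0, 1.8])
q, qp = 1.3, 0.5
roundtrip = exp_st(log_st(x, q, qp), q, qp)

# As q' -> 1 the ST logarithm reduces to the Tsallis q-logarithm (41).
tsallis = (x ** (1 - q) - 1) / (1 - q)
near_one = log_st(x, q, qp=1.0 - 1e-9)
```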

The plots of the $(q,q')$-logarithm and $(q,q')$-exponential for various values of $q = q'$ are illustrated in Figure 4.

Figure 4. Plots of the $(q,q')$-logarithm and $(q,q')$-exponential functions for different values of the parameters in the special case $q = q'$.

Moreover, it is easy to prove the following useful properties

$\log_{q,q'}^{ST}(1/x) = -\log_{2-q,2-q'}^{ST}(x)$, (45)
$\dfrac{d \log_{q,q'}^{ST}(x)}{dx} = x^{-q}\, \exp\!\left(\dfrac{1-q'}{1-q}\big(x^{1-q}-1\big)\right) > 0 \quad \text{for } x>0 \text{ and all } q, q' \text{ with } q \neq 1$. (46)

Defining the $(q,q')$-product as [52]:

$x \otimes_{q,q'}^{ST} y = \exp_{q,q'}^{ST}\big(\log_{q,q'}^{ST}(x) + \log_{q,q'}^{ST}(y)\big)$ (47)

we have the key formulas

$\exp_{q,q'}^{ST}(x+y) = \exp_{q,q'}^{ST}(x) \otimes_{q,q'} \exp_{q,q'}^{ST}(y)$, (48)
$\exp_{q,q'}^{ST}\big(\log_{q,q'}^{ST}(x) + y\big) = x \otimes_{q,q'} \exp_{q,q'}^{ST}(y)$. (49)

Let us consider now that the link function is defined componentwise as

$f_{q,q'}(\mathbf{w}) = \log_{q,q'}^{ST}(\mathbf{w})$. (50)

The novel $(q,q')$-GEG update can take the following form

$\mathbf{w}_{t+1} = \mathbf{w}_t \otimes_{q,q'} \exp_{q,q'}^{ST}\big(-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$. (51)

In this case, the update is more complex than in the previous case. An alternative approach is to apply the MMD/NG Formula (10):

$\mathbf{w}_{t+1} = \left[\mathbf{w}_t - \eta\, \mathrm{diag}\!\left\{\mathbf{w}_t^{q} \odot \exp\!\left(\dfrac{1-q'}{1-q}\big(1-\mathbf{w}_t^{1-q}\big)\right)\right\} \nabla_{\mathbf{w}} L(\mathbf{w}_t)\right]_+$, (52)

which can be written equivalently in a scalar form as

$w_{i,t+1} = \left[w_{i,t} - \eta\, w_{i,t}^{q}\, \exp\!\left(\dfrac{1-q'}{1-q}\big(1-w_{i,t}^{1-q}\big)\right) \dfrac{\partial L(\mathbf{w}_t)}{\partial w_i}\right]_+$. (53)

Remark 4. 

Extension to Three Parameters: the $(q,q',r)$-Logarithm. Note that by using the definition $[x]_q = \exp(\log_q^T(x))$ we can write the ST logarithm in the compact form

$\log_{q,q'}^{ST}(x) = \dfrac{[x]_q^{1-q'} - 1}{1-q'} = \dfrac{\big[\exp\big(\log_q^T(x)\big)\big]^{1-q'} - 1}{1-q'}$. (54)

Analogously, we can define

$[x]_{q,q'} = \exp\big(\log_{q,q'}^{ST}(x)\big)$. (55)

Hence, we can formulate a three-parameter logarithm as proposed in [53]

$\log_{q,q',r}^{CC}(x) = \dfrac{\big([x]_{q,q'}\big)^{1-r} - 1}{1-r} = \dfrac{\Big(\exp\big(\log_{q,q'}^{ST}(x)\big)\Big)^{1-r} - 1}{1-r}$. (56)

The plots of the $(q,q',r)$-logarithm and $(q,q',r)$-exponential for coincident values of q, q′, and r are illustrated in Figure 5.

Figure 5. Plots of the $(q,q',r)$-logarithm and $(q,q',r)$-exponential functions when the parameters coincide, $q = q' = r$.

In a similar way as in the previous section, we can derive MD updates using the above-defined three-parameter logarithm as a link function.

5. MD and GEG Using the Kaniadakis Entropy and Its Extensions and Generalizations

5.1. Basic Properties of κ-Logarithm and κ-Exponential

An entropic structure emerging in the context of special relativity is the one defined by Kaniadakis [36,37] as follows

$S_\kappa(p) = \sum_i p_i \log_\kappa^K(1/p_i)$, (57)

where a deformed κ-logarithm referred to as the Kaniadakis κ-logarithm is defined as [36,37]:

$\log_\kappa^K(x) = \begin{cases} \dfrac{x^\kappa - x^{-\kappa}}{2\kappa} = \dfrac{1}{\kappa}\sinh\big(\kappa \ln(x)\big) & \text{if } x>0 \text{ and } 0<\kappa^2<1, \\ \ln(x) & \text{if } x>0 \text{ and } \kappa = 0. \end{cases}$ (58)

The inverse function of the Kaniadakis κ-logarithm is the deformed exponential function $\exp_\kappa^K(x)$, represented as

$\exp_\kappa^K(x) = \exp\!\left(\int_0^x \dfrac{dy}{\sqrt{1+\kappa^2 y^2}}\right) = \Big(\sqrt{1+\kappa^2 x^2} + \kappa x\Big)^{1/\kappa} = \begin{cases} \exp\!\big(\tfrac{1}{\kappa}\,\mathrm{arsinh}(\kappa x)\big) & -1<\kappa<1,\ \kappa \neq 0, \\ \exp(x) & \kappa = 0. \end{cases}$ (59)

The plots of the κ-logarithm and κ-exponential functions for various values of κ are illustrated in Figure 6 and Figure 7.

Figure 6. Plots of the κ-logarithm and κ-exponential functions for different values of the parameter κ.

Figure 7. Surface plots of the κ-logarithm. The black continuous line shows the classical logarithm $\ln(x)$ as a reference, obtained for $\kappa=0$.

Note that the Kaniadakis logarithm can also be expressed in terms of the Tsallis logarithm as

$\log_\kappa^K(x) = \dfrac{\log_{1+\kappa}^T(x) + \log_{1-\kappa}^T(x)}{2}$. (60)

These functions have the following fundamental and useful properties [37,41]:

$\log_\kappa^K(1) = 0, \qquad \log_\kappa^K(0^+) = -\infty, \qquad \log_\kappa^K(+\infty) = +\infty$, (61)
$\log_\kappa^K(1/x) = -\log_\kappa^K(x)$, (62)
$\log_\kappa^K(x^\lambda) = \lambda\, \log_{\lambda\kappa}^K(x)$, (63)
$\log_\kappa^K(xy) = \dfrac{y^\kappa + y^{-\kappa}}{2}\,\log_\kappa^K(x) + \dfrac{x^\kappa + x^{-\kappa}}{2}\,\log_\kappa^K(y)$, (64)
$\log_\kappa^K(\exp(x)) = \tfrac{1}{\kappa}\sinh(\kappa x)$, (65)
$\ln\big(\exp_\kappa^K(x)\big) = \tfrac{1}{\kappa}\,\mathrm{arsinh}(\kappa x)$, (66)
$\dfrac{\partial \log_\kappa^K(x)}{\partial x} = \dfrac{x^\kappa + x^{-\kappa}}{2x} = \dfrac{\cosh\big(\kappa\ln(x)\big)}{x} > 0 \quad \text{for } x>0$, (67)
$\dfrac{\partial^2 \log_\kappa^K(x)}{\partial x^2} = -\left(\dfrac{1-\kappa}{2}\,x^{\kappa-2} + \dfrac{1+\kappa}{2}\,x^{-\kappa-2}\right) < 0 \quad \text{for } \kappa \in [-1,1]$. (68)

The last two properties indicate that the Kaniadakis κ-logarithm is monotonically increasing for any value of κ, and for |κ|<1, it is additionally a strictly concave function.
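The defining formulas and the properties (62) and (76) can be verified numerically (illustrative code; names are ours):

```python
import numpy as np

def log_k(x, k):
    """Kaniadakis kappa-logarithm (58); ln(x) in the limit kappa -> 0."""
    return np.log(x) if k == 0 else (x ** k - x ** (-k)) / (2 * k)

def exp_k(x, k):
    """Kaniadakis kappa-exponential (59), inverse of log_k."""
    return np.exp(x) if k == 0 else (np.sqrt(1 + k ** 2 * x ** 2) + k * x) ** (1 / k)

x = np.array([0.2, 1.0, 3.0])
k = 0.4
roundtrip = exp_k(log_k(x, k), k)          # should recover x
antisym = log_k(1.0 / x, k)                # property (62): equals -log_k(x)
unit = exp_k(2.0, k) * exp_k(-2.0, k)      # property (76): equals 1
```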

The κ-logarithm can be approximated as a power series

$\log_\kappa^K(x) \approx \ln(x) + \dfrac{\kappa^2}{3!}\,[\ln(x)]^3 + \dfrac{\kappa^4}{5!}\,[\ln(x)]^5 + \dfrac{\kappa^6}{7!}\,[\ln(x)]^7 + \cdots, \quad x>0$. (69)

Furthermore, it is important to note that applying the Taylor series expansion of the κ-exponential we can obtain a simple approximation as

$\exp_\kappa^K(x) = 1 + x + \dfrac{x^2}{2!} + \big(1-\kappa^2\big)\dfrac{x^3}{3!} + \big(1-4\kappa^2\big)\dfrac{x^4}{4!} + \cdots$ (70)
$\phantom{\exp_\kappa^K(x)} = \exp(x) - \dfrac{\kappa^2}{3!}\,x^3 - \dfrac{4\kappa^2}{4!}\,x^4 + \cdots$. (71)

Two notable features of the κ-exponential function are that it asymptotically approaches the regular exponential function for small $x$ and a power law for large values of $|x|$ [37,54]. Specifically,

$\exp_\kappa(x) \underset{x\to 0}{\sim} \exp(x)$, (72)
$\exp_\kappa(x) \underset{x\to\pm\infty}{\sim} |2\kappa x|^{\pm 1/|\kappa|}$. (73)

The κ-exponential function has the following basic properties [36,37,41]

$\exp_\kappa^K(0) = 1$, (74)
$\exp_\kappa^K(-\infty) = 0^+, \qquad \exp_\kappa^K(+\infty) = +\infty$, (75)
$\exp_\kappa^K(x)\,\exp_\kappa^K(-x) = 1$, (76)
$\big(\exp_\kappa^K(x)\big)^r = \exp_{\kappa/r}^K(rx), \quad r \in \mathbb{R}$, (77)
$\dfrac{\partial \exp_\kappa^K(x)}{\partial x} > 0$, (78)
$\dfrac{\partial^2 \exp_\kappa^K(x)}{\partial x^2} > 0 \quad \text{for } \kappa \in [-1,1]$. (79)

The last two properties mean that the Kaniadakis κ-exponential is a monotonically increasing and convex function for a specific range of the parameter κ.

The property (76) emerges as a particular case of the more general ones

$\exp_\kappa^K(x)\,\exp_\kappa^K(y) = \exp_\kappa^K(x \oplus_\kappa y)$, (80)
$\log_\kappa^K(xy) = \log_\kappa^K(x) \oplus_\kappa \log_\kappa^K(y)$, (81)

where κ-addition is defined as

$x \oplus_\kappa y = x\sqrt{1+\kappa^2 y^2} + y\sqrt{1+\kappa^2 x^2}$ (82)
$\phantom{x \oplus_\kappa y} \approx x + y + \dfrac{\kappa^2}{2}\big(xy^2 + x^2 y\big) - \dfrac{\kappa^4}{8}\big(xy^4 + x^4 y\big) + \cdots$ (83)

By defining and evaluating the κ-product

$x \otimes_\kappa y = \exp_\kappa\big(\log_\kappa(x) + \log_\kappa(y)\big)$ (84)
$= \left[\dfrac{x^\kappa - x^{-\kappa}}{2} + \dfrac{y^\kappa - y^{-\kappa}}{2} + \sqrt{1+\left(\dfrac{x^\kappa - x^{-\kappa}}{2} + \dfrac{y^\kappa - y^{-\kappa}}{2}\right)^2}\,\right]^{1/\kappa}$ (85)
$= \exp\!\left(\dfrac{1}{\kappa}\,\mathrm{arsinh}\!\left(\dfrac{x^\kappa - x^{-\kappa} + y^\kappa - y^{-\kappa}}{2}\right)\right)$ (86)
$= \exp\!\left(\dfrac{1}{\kappa}\,\mathrm{arsinh}\big(\sinh(\kappa\ln x) + \sinh(\kappa\ln y)\big)\right)$, (87)

we have the key formulas for our MD application

$\exp_\kappa^K(x+y) = \exp_\kappa^K(x) \otimes_\kappa \exp_\kappa^K(y)$, (88)
$\exp_\kappa^K\big(\log_\kappa^K(x) + y\big) = x \otimes_\kappa \exp_\kappa^K(y)$. (89)
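The key formulas (88)–(89) can be verified numerically from the definition (84) of the κ-product (illustrative code; names and values are ours):

```python
import numpy as np

def log_k(x, k):
    return (x ** k - x ** (-k)) / (2 * k)

def exp_k(x, k):
    return (np.sqrt(1 + k ** 2 * x ** 2) + k * x) ** (1 / k)

def k_prod(x, y, k):
    """kappa-product (84) on positive reals: exp_k(log_k(x) + log_k(y))."""
    return exp_k(log_k(x, k) + log_k(y, k), k)

k, x, y = 0.3, 0.7, 1.9
lhs_88 = exp_k(x + y, k)
rhs_88 = k_prod(exp_k(x, k), exp_k(y, k), k)

w = 1.4
lhs_89 = exp_k(log_k(w, k) + y, k)
rhs_89 = k_prod(w, exp_k(y, k), k)
```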

Appendix A.2 gives an overview of the κ-algebra and calculus.

5.2. MD and GEG Using the Kaniadakis Entropy and κ-Logarithm

Let us assume that the link function in Mirror Descent can take the following componentwise form

$f_\kappa(\mathbf{w}_t) = \log_\kappa^K(\mathbf{w}_t), \qquad \mathbf{w}_t = [w_{1,t},\ldots,w_{N,t}]^T \in \mathbb{R}^N_+$. (90)

Note that since the first derivative of $\log_\kappa^K(w)$ is positive and its second derivative is negative, the link function is a (componentwise) increasing function and, additionally, a concave function for $\kappa \in (-1,1]$. In this case the generating function is:

$F_\kappa(\mathbf{w}_t) = \sum_{i=1}^N \dfrac{1}{2\kappa}\left[\dfrac{w_{i,t}^{1+\kappa}}{1+\kappa} - \dfrac{w_{i,t}^{1-\kappa}}{1-\kappa}\right]$. (91)

Taking into account Formula (6), we obtain a novel κ-GEG update

$\mathbf{w}_{t+1} = \mathbf{w}_t \otimes_\kappa \exp_\kappa^K\big(-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big)$, (92)

where $\otimes_\kappa$ is the κ-product defined by Equation (84), performed componentwise.
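A runnable sketch of the κ-GEG update (92) on a toy positive least-squares problem (illustrative code; the target, step size, and κ are our choices):

```python
import numpy as np

def log_k(x, k):
    return (x ** k - x ** (-k)) / (2 * k)

def exp_k(x, k):
    return (np.sqrt(1 + k ** 2 * x ** 2) + k * x) ** (1 / k)

def k_geg_step(w, grad, eta, k):
    """kappa-GEG / kappa-MD update (92): w_{t+1} = exp_k(log_k(w_t) - eta * grad)."""
    return exp_k(log_k(w, k) - eta * grad, k)

# Toy problem: minimize L(w) = 0.5 * ||w - target||^2 over positive w.
target = np.array([0.3, 1.0, 2.5])
w = np.ones(3)
for _ in range(3000):
    w = k_geg_step(w, w - target, eta=0.1, k=0.25)
```

Since $\exp_\kappa^K$ maps the whole dual line to positive numbers, the iterates remain strictly positive without any projection step.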

5.3. Two-Parameter Logarithms Based on Generalized Kaniadakis–Lissia–Scarfone Entropy

Another important generalized entropy has the following form

$S_{\kappa,r}(p) = -\sum_{i=1}^{W} p_i^{\,r+1}\, \dfrac{p_i^{\kappa} - p_i^{-\kappa}}{2\kappa} = -\sum_{i=1}^{W} p_i \log_{\kappa,r}(p_i)$, (93)

which was introduced by Sharma, Taneja and Mittal (STM) in [55,56,57], and also investigated, independently, by Kaniadakis, Lissia and Scarfone (KLS) in [38,39,58].

Equation (93) mimics the expression of the Boltzmann–Gibbs entropy by replacing the standard natural logarithm $\ln(x)$ with the two-parameter deformed logarithm $\log_{\kappa,r}(x)$ defined as

$\log_{\kappa,r}(x) = x^r\, \dfrac{x^\kappa - x^{-\kappa}}{2\kappa}, \qquad x>0,\ r \in \mathbb{R}, \quad \text{for } -|\kappa| \le r \le \tfrac{1}{2} - \big|\tfrac{1}{2} - |\kappa|\big|$. (94)

The surface plots of the (κ,r)-logarithm for various values of hyperparameters κ and r are illustrated in Figure 8 and Figure 9.

Figure 8. Surface plots of the $(\kappa,r)$-logarithm for various values of the hyperparameters κ and r. The left-hand-side figure illustrates the $(\kappa,r)$-logarithm in terms of κ and x when $r=0.7$. The right-hand-side figure illustrates the $(\kappa,r)$-logarithm, now in terms of r and x, when $\kappa=0.7$. The black dashed line coincides with the κ-logarithm for $\kappa=0.7$, since in this case $r=0$.

Figure 9. Surface plots of the $(\kappa,r)$-logarithm in terms of the hyperparameters κ and r, when $x \in \{0.3, 0.7\}$. The drawings make apparent that changes in the hyperparameter r have a much stronger influence on the magnification of the response than changes in the hyperparameter κ, which correspond to more subtle elongations. The figure on the left-hand side evaluates the $(\kappa,r)$-logarithm for $x=0.3$, whereas the figure on the right-hand side evaluates it for $x=0.7$.

The (κ,r)-logarithm can be expressed via the Kaniadakis κ-logarithm and the Tsallis q-logarithm:

$\log_{\kappa,r}(x) = x^{r}\,\frac{x^{\kappa}-x^{-\kappa}}{2\kappa} = x^{r}\log_{\kappa}^{K}(x) = x^{\,r-\kappa}\log^{T}_{1-2\kappa}(x).$ (95)

Obviously, for $r=0$ the (κ,r)-logarithm simplifies to the Kaniadakis κ-logarithm, and for $r=\pm|\kappa|$ one recovers the Tsallis q-logarithm with $q = 1 \mp 2|\kappa|$ ($0 < q < 2$).
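These reductions are straightforward to verify numerically. A minimal sketch (illustrative code; the evaluation grid and parameter values are our own assumptions):

```python
import numpy as np

def log_kr(x, kappa, r):
    # two-parameter (kappa, r)-logarithm (94)
    return x**r * (x**kappa - x**(-kappa)) / (2.0 * kappa)

def log_kappa(x, kappa):
    # Kaniadakis kappa-logarithm (the r = 0 case)
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def log_q(x, q):
    # Tsallis q-logarithm
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

x = np.linspace(0.1, 3.0, 50)
kappa = 0.2

# r = 0 recovers the Kaniadakis kappa-logarithm
assert np.allclose(log_kr(x, kappa, 0.0), log_kappa(x, kappa))
# r = +|kappa| recovers the Tsallis q-logarithm with q = 1 - 2|kappa|
assert np.allclose(log_kr(x, kappa, kappa), log_q(x, 1.0 - 2.0 * kappa))
# r = -|kappa| recovers the Tsallis q-logarithm with q = 1 + 2|kappa|
assert np.allclose(log_kr(x, kappa, -kappa), log_q(x, 1.0 + 2.0 * kappa))
```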

By introducing a new parameter $\omega = r/\kappa$, i.e., replacing $r = \omega\kappa$, we can represent the logarithm as

$\log_{\kappa,\omega}(x) = \frac{x^{\kappa(\omega+1)} - x^{\kappa(\omega-1)}}{2\kappa},$ (96)

which for $\omega=0$ simplifies to the κ-logarithm, while for $\omega=1$ and $\kappa=(1-q)/2$ we obtain the q-logarithm. This formula indicates that the (κ,r)-logarithm smoothly interpolates between the Kaniadakis logarithm and the Tsallis logarithm.

Summarizing, the (κ,r)-logarithm can be described as follows [38,39]:

$\log^{KLS}_{\kappa,r}(x) = \begin{cases} \dfrac{x^{r+\kappa} - x^{r-\kappa}}{2\kappa}, & x>0,\ r\in\mathbb{R},\ -|\kappa| \le r \le |\kappa|,\\[2mm] \log^{K}_{\kappa}(x) = \dfrac{x^{\kappa}-x^{-\kappa}}{2\kappa}, & x>0,\ r=0,\ \kappa\in[-1,1],\ \kappa\neq 0,\\[2mm] \log^{T}_{q}(x) = \dfrac{x^{1-q}-1}{1-q}, & x>0,\ r=\kappa=(1-q)/2,\ q\neq 1,\ q>0,\\[2mm] \ln(x), & x>0,\ r=\kappa=0. \end{cases}$ (97)

It should also be noted that the (κ,r)-logarithm can be approximated by the following power series around $x=1$ (i.e., for relatively small $|\ln(x)|$, κ, and r):

$\log^{KLS}_{\kappa,r}(x) \approx \ln(x) + r\,[\ln(x)]^{2} + \tfrac{1}{6}\big(\kappa^{2} + 3r^{2}\big)[\ln(x)]^{3}.$ (98)

The (κ,r)-logarithm has the following basic properties:

$\log_{\kappa,r}(1)=0, \qquad \log_{\kappa,r}(0^{+}) = -\infty \ \text{for } r<|\kappa|, \qquad \log_{\kappa,r}(+\infty) = +\infty \ \text{for } r>-|\kappa|,$
$\log_{\kappa,r}(1/x) = -\log_{\kappa,-r}(x),$ (99)
$\log_{\kappa,r}(x^{\lambda}) = \lambda \log_{\lambda\kappa,\,\lambda r}(x),$ (100)
$\dfrac{\partial \log_{\kappa,r}(x)}{\partial x} > 0 \quad \text{for } -|\kappa| \le r \le |\kappa|,$ (101)
$\dfrac{\partial^{2} \log_{\kappa,r}(x)}{\partial x^{2}} < 0 \quad \text{for } -|\kappa| \le r \le \tfrac12 - \big|\tfrac12 - |\kappa|\big|.$ (102)

The last two properties indicate that the (κ,r)-logarithm is a strictly monotonically increasing function for $-|\kappa| \le r \le |\kappa|$ and additionally a concave function for $-|\kappa| \le r \le \tfrac12 - |\tfrac12 - |\kappa||$.

Remark 5. 

Relation to the Euler Logarithm: It is interesting to note that for $r+\kappa = a$ and $r-\kappa = b$ the KLS (κ,r)-logarithm can be represented as the Euler logarithm [17,59]:

$\log^{Eu}_{a,b}(x) = \frac{x^{a} - x^{b}}{a - b}, \qquad x>0,\ a<0,\ 0<b<1,$ (103)

which is related to the Borges–Roditi entropy [50,60].

Connection to the Schwämmle–Tsallis logarithm: By applying in (103) the nonlinear transformation $x \to \exp(\log_q(x))$ with $a = 1-q$ and $b = 0$, we obtain the Schwämmle–Tsallis logarithm (39).

Connection to the Mean Value Theorem: The function has a deep connection to the Mean Value Theorem applied to power functions. For the power function $g(t)=x^{t}$, the Mean Value Theorem guarantees the existence of some parameter $c \in (a,b)$ such that $g'(c)=\frac{g(b)-g(a)}{b-a}$, which yields:

$\log^{Eu}_{a,b}(x) = \frac{x^{b} - x^{a}}{b-a} = x^{c}\,\ln(x) \quad \text{for some } c\in(a,b).$ (104)

Logarithmic Mean Connection: The function relates to the logarithmic mean $L(u,v)=\frac{u-v}{\ln u - \ln v}$ through the substitution $u=x^{a}$, $v=x^{b}$. These connections provide alternative computational approaches and theoretical insights.

Exponential Function Theory: The underlying structure connects to the exponential differentiation rule $\frac{d}{dt}x^{t} = x^{t}\ln(x)$, which explains the limiting behavior observed in the analysis.

Computational and numerical considerations: The numerical analysis reveals several important computational aspects:

  1. Numerical Stability: The KLS logarithm becomes increasingly stable as $x \to 1$, but exhibits potential numerical instability for x values far from unity.

  2. Parameter Sensitivity: Small x values create higher sensitivity to parameter changes, requiring careful numerical handling.

  3. Convergence Properties: The limiting behavior (e.g., $\kappa \to 0$) requires special computational treatment using L'Hôpital's rule.

5.4. Exponential KLS Function and Its Properties

The existence of $\exp_{\kappa,r}(x)$, the inverse function of $\log_{\kappa,r}(x)$, follows from the monotonicity of the latter, although an explicit expression, in general, cannot be given. In other words, the inverse function cannot be expressed in closed analytical form, but it can be expressed, for example, in terms of the Lambert–Tsallis $W_q$-functions, which are the solutions of the equation $W_q(z)\,[1+(1-q)W_q(z)]_{+}^{1/(1-q)} = z$:

$\exp_{\kappa,r}(x) = \Big[\,1 - \lambda\, W_{\lambda+1}\big(\tilde{x}/\lambda\big)\Big]^{-1/(2\kappa)},$ (105)

where $\lambda = 2\kappa/(r+\kappa)$, $\tilde{x} = 2\kappa x$, and $W_q$ is the Lambert–Tsallis function [61].
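When the Lambert–Tsallis route is inconvenient, the strict monotonicity of the (κ,r)-logarithm also allows a purely numerical inversion. Below is a minimal sketch (our illustration, not the paper's method), using bisection in log-space because the domain spans many orders of magnitude; the bracket endpoints and parameter values are assumptions.

```python
import numpy as np

def log_kr(x, kappa, r):
    # two-parameter KLS logarithm (94)
    return x**r * (x**kappa - x**(-kappa)) / (2.0 * kappa)

def exp_kr_numeric(y, kappa, r, lo=1e-12, hi=1e12, iters=200):
    # invert the strictly increasing log_kr on (0, inf) by geometric bisection;
    # valid for -|kappa| <= r <= |kappa|, where log_kr is monotone
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # midpoint in log-space
        if log_kr(mid, kappa, r) < y:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

kappa, r = 0.3, 0.1
for y in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    x = exp_kr_numeric(y, kappa, r)
    assert abs(log_kr(x, kappa, r) - y) < 1e-8   # exp_kr undoes log_kr
```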

Another, much simpler approach is to use Lagrange's inversion theorem around 1 to obtain the following rough power-series approximation (which may be sufficient for most of our applications):

$\exp_{\kappa,r}(x) \approx 1 + x + \tfrac12(1-2r)\,x^{2} + \Big(\tfrac16 - r + \tfrac32 r^{2} - \tfrac16\kappa^{2}\Big)x^{3} + \cdots$ (106)
$= \exp(x) - r\,x^{2} + \Big(\tfrac32 r^{2} - r - \tfrac16\kappa^{2}\Big)x^{3} + O(x^{4}).$ (107)

Hence, we can represent the (κ,r)-exponential as follows:

$\exp_{\kappa,r}(x) = \begin{cases} \approx \exp(x) - r x^{2} + \big(\tfrac32 r^{2} - r - \tfrac16\kappa^{2}\big)x^{3}, & r\in\mathbb{R},\ -|\kappa| \le r \le |\kappa|,\ |\kappa|<1,\\[2mm] \exp^{K}_{\kappa}(x) = \Big(\kappa x + \sqrt{1+\kappa^{2}x^{2}}\Big)^{1/\kappa}, & r=0,\ \kappa\in[-1,1],\ \kappa\neq 0,\\[2mm] \exp^{T}_{q}(x) = \big[1+(1-q)x\big]_{+}^{1/(1-q)}, & r=\kappa=(1-q)/2,\ q\neq 1,\\[2mm] \exp(x), & r=\kappa=0. \end{cases}$ (108)

Furthermore, the (κ,r)-exponential function has the following fundamental properties:

$\exp_{\kappa,r}(0)=1, \qquad \exp_{\kappa,r}(-\infty) = 0^{+} \ \text{for } r<|\kappa|, \qquad \exp_{\kappa,r}(+\infty) = +\infty \ \text{for } r>-|\kappa|,$
$\exp_{\kappa,r}(x)\, \exp_{\kappa,-r}(-x) = 1,$ (109)
$\big(\exp_{\kappa,r}(x)\big)^{\lambda} = \exp_{\kappa/\lambda,\, r/\lambda}(\lambda x),$ (110)
$\dfrac{\partial \exp_{\kappa,r}(x)}{\partial x} > 0 \quad \text{for } -|\kappa| \le r \le |\kappa|,$ (111)
$\dfrac{\partial^{2} \exp_{\kappa,r}(x)}{\partial x^{2}} > 0 \quad \text{for } -|\kappa| \le r \le \tfrac12 - \big|\tfrac12 - |\kappa|\big|.$ (112)

The last two properties mean that the (κ,r)-exponential is a monotonically increasing convex function for these parameter ranges.

Two notable features of the (κ,r)-logarithm and exponential function are that the exponential asymptotically approaches the regular exponential function for small x and asymptotically approaches a power law for large absolute x, with corresponding behavior of the logarithm:

$\log_{\kappa,r}(x) \simeq -\frac{1}{2|\kappa|}\, x^{\,r-|\kappa|} \quad \text{as } x\to 0^{+},$ (113)
$\log_{\kappa,r}(x) \simeq \frac{x^{\,r+|\kappa|}}{2|\kappa|} \quad \text{as } x\to +\infty,$ (114)
$\exp_{\kappa,r}(x) \simeq \exp(x) \quad \text{as } x\to 0,$ (115)
$\exp_{\kappa,r}(x) \simeq |2\kappa x|^{1/(r\pm|\kappa|)} \quad \text{as } x\to \pm\infty.$ (116)
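These asymptotic rates are easy to confirm numerically; a small sanity check with arbitrary admissible parameter values (our illustration):

```python
import numpy as np

def log_kr(x, kappa, r):
    # two-parameter KLS logarithm (94)
    return x**r * (x**kappa - x**(-kappa)) / (2.0 * kappa)

kappa, r = 0.4, 0.1   # arbitrary values satisfying -|kappa| <= r <= |kappa|

# power-law growth at infinity: log_kr(x) ~ x^(r + |kappa|) / (2|kappa|)
x = 1e8
assert abs(log_kr(x, kappa, r) / (x**(r + kappa) / (2.0 * kappa)) - 1.0) < 1e-5

# power-law divergence at the origin: log_kr(x) ~ -x^(r - |kappa|) / (2|kappa|)
x = 1e-8
assert abs(log_kr(x, kappa, r) / (-(x**(r - kappa)) / (2.0 * kappa)) - 1.0) < 1e-5
```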

By defining

$x \otimes_{\kappa,r} y = \exp_{\kappa,r}\!\big(\log_{\kappa,r}(x) + \log_{\kappa,r}(y)\big),$ (117)

we have the key formulas for our MD (GEG) implementations:

$\exp_{\kappa,r}(x+y) = \exp_{\kappa,r}(x) \otimes_{\kappa,r} \exp_{\kappa,r}(y),$ (118)
$\exp_{\kappa,r}\!\big(\log_{\kappa,r}(x) + y\big) = x \otimes_{\kappa,r} \exp_{\kappa,r}(y).$ (119)

Let us assume that the link function is defined as $f(w)=\log_{\kappa,r}(w)$, with inverse $f^{(-1)}(w)=\exp_{\kappa,r}(w)$ (possibly in approximated form); then, using the general MD Formula (6) and the fundamental properties described above, we obtain a general MD formula employing a wide family of deformed logarithms arising from group entropies or trace-form entropies:

$\mathbf{w}_{t+1} = \exp_{\kappa,r}\!\big(\log_{\kappa,r}(\mathbf{w}_t) - \eta_t \nabla L(\mathbf{w}_t)\big) = \mathbf{w}_t \otimes_{\kappa,r} \exp_{\kappa,r}\!\big({-\eta_t \nabla L(\mathbf{w}_t)}\big),$ (120)

where the $\otimes_{\kappa,r}$-multiplication is defined as

$x \otimes_{\kappa,r} y = \exp_{\kappa,r}\!\big(\log_{\kappa,r}(x) + \log_{\kappa,r}(y)\big).$ (121)

Alternatively, due to the complexity of computing $\exp_{\kappa,r}(x)$ precisely in the general case, we can use the MMD/NG Formula (10) to derive a quite flexible and general NG gradient update:

$\mathbf{w}_{t+1} = \left[\mathbf{w}_t - \eta_t\, \mathrm{diag}\!\left(\left(\frac{\partial \log_{\kappa,r}(\mathbf{w}_t)}{\partial \mathbf{w}_t}\right)^{-1}\right) \nabla L(\mathbf{w}_t)\right]_{+},$ (122)

where $\mathrm{diag}\big((\partial \log_{\kappa,r}(\mathbf{w}_t)/\partial \mathbf{w}_t)^{-1}\big)$ is a positive-definite diagonal matrix, with the diagonal entries

$\left(\frac{\partial \log_{\kappa,r}(w_{i,t})}{\partial w_{i,t}}\right)^{-1} = \frac{2\kappa}{(r+\kappa)\, w_{i,t}^{\,r+\kappa-1} - (r-\kappa)\, w_{i,t}^{\,r-\kappa-1}} > 0, \qquad -|\kappa| \le r \le |\kappa|.$ (123)
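A minimal numerical sketch of the natural-gradient update (122) with the diagonal entries (123); the toy quadratic loss and all parameter values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def dlog_kr_inv(w, kappa, r):
    # diagonal entries (123): inverse of d log_{kappa,r}(w)/dw,
    # strictly positive for -|kappa| <= r <= |kappa|
    return 2.0 * kappa / ((r + kappa) * w**(r + kappa - 1.0)
                          - (r - kappa) * w**(r - kappa - 1.0))

def kls_ng_step(w, grad, eta, kappa, r, eps=1e-12):
    # additive NG update (122): gradient rescaled by the diagonal inverse metric,
    # followed by the projection [.]_+ onto the positive orthant
    return np.maximum(w - eta * dlog_kr_inv(w, kappa, r) * grad, eps)

# toy problem (hypothetical): L(w) = 0.5 * ||w - target||^2 over positive weights
target = np.array([0.2, 0.5, 0.3])
w = np.ones(3)
for _ in range(500):
    w = kls_ng_step(w, w - target, eta=0.2, kappa=0.3, r=0.1)
```

Note how the metric factor shrinks the effective step size for small weights, which is what keeps the iterates inside the positive orthant in practice.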

6. Generalization and Normalization of Mirror Descent

Summarizing, all of the GEG updates proposed in this paper can be presented in normalized form (by projecting onto the unit simplex) in the following general and flexible form:

$\tilde{\mathbf{w}}_{t+1} = \mathbf{w}_t \otimes_D \exp_D\!\big({-\boldsymbol{\eta}_t \odot \nabla \hat{L}(\mathbf{w}_t)}\big) \quad \text{(generalized multiplicative update)},$ (124)
$\mathbf{w}_{t+1} = \frac{\tilde{\mathbf{w}}_{t+1}}{\|\tilde{\mathbf{w}}_{t+1}\|_1} \quad \text{(projection onto the unit simplex)},$ (125)

where $\exp_D(x)$ ($\log_D(x)$) is a generalized exponential (logarithm), $\hat{L}(\mathbf{w}_t)=L(\mathbf{w}_t/\|\mathbf{w}_t\|_1)$ is the normalized/scaled loss function, $\boldsymbol{\eta}_t$ is a vector of learning rates, $\nabla \hat{L}(\mathbf{w}_t) = \nabla L(\mathbf{w}_t) - \big(\mathbf{w}_t^{T}\nabla L(\mathbf{w}_t)\big)\mathbf{1}$, and the generalized $\otimes_D$-multiplication is computed as

$\mathbf{w}_t \otimes_D \exp_D(\mathbf{g}_t) = \exp_D\!\big(\log_D(\mathbf{w}_t) + \mathbf{g}_t\big).$ (126)

Here, $\log_D(\mathbf{w}_t)$ and its inverse $\exp_D(\cdot)$ denote any of the deformed logarithms and exponentials investigated in this paper (i.e., the Tsallis, Kaniadakis, ST, KLS and KS exponentials/logarithms).

Alternatively, when the inverse function cannot be computed precisely, we can use the MMD/NG additive natural-gradient Formula (10), which is expressed in general as

$\tilde{\mathbf{w}}_{t+1} = \left[\mathbf{w}_t - \eta\, \mathrm{diag}\!\left(\left(\frac{d \log_D(\mathbf{w}_t)}{d \mathbf{w}_t}\right)^{-1}\right) \nabla_{\mathbf{w}} \hat{L}(\mathbf{w}_t)\right]_{+},$ (127)
$\mathbf{w}_{t+1} = \frac{\tilde{\mathbf{w}}_{t+1}}{\|\tilde{\mathbf{w}}_{t+1}\|_1}, \qquad \mathbf{w}_t \in \mathbb{R}^{N}_{+}\ \ \forall t,$ (128)

where $\mathrm{diag}\big((d\log_D(\mathbf{w})/d\mathbf{w})^{-1}\big) = \mathrm{diag}\big((d\log_D(w_1)/d w_1)^{-1}, \ldots, (d\log_D(w_N)/d w_N)^{-1}\big)$ is a diagonal positive-definite matrix.
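The normalized scheme (124) and (125) can be sketched with the Tsallis pair playing the role of the generic $(\exp_D, \log_D)$; the toy loss, step size, and value of q below are illustrative assumptions of ours.

```python
import numpy as np

def log_q(x, q):
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def normalized_geg_step(w, grad_fn, eta, q):
    p = w / np.sum(w)
    g = grad_fn(p)                       # gradient of the scaled loss
    g_hat = g - np.dot(p, g)             # grad L^ = grad L - (w^T grad L) 1
    w_tilde = exp_q(log_q(w, q) - eta * g_hat, q)   # (124) multiplicative update
    return w_tilde / np.sum(w_tilde)     # (125) projection onto the unit simplex

# toy quadratic loss L(p) = 0.5 * ||p - p_star||^2, minimizer inside the simplex
p_star = np.array([0.6, 0.3, 0.1])
w = np.ones(3) / 3.0
for _ in range(2000):
    w = normalized_geg_step(w, lambda p: p - p_star, eta=0.3, q=0.7)
# the iterates remain on the simplex and approach p_star
```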

7. Conclusions and Discussion

This study establishes a comprehensive framework for applying trace-form entropies and associated deformed logarithms in both Mirror Descent and equivalently Generalized Exponentiated Gradient algorithms. By systematically exploring trace-form entropies, especially Tsallis, Kaniadakis, Scarfone, and Sharma–Taneja–Mittal forms as regularization terms, we unveil new families of mirror gradient descent algorithms that can be tailored to the optimization landscape through suitably chosen hyperparameters. The adoption of these generalized entropies opens the door to obtaining advantageous properties such as improved convergence rates, robustness against vanishing/exploding gradients, and inherent flexibility for handling non-Euclidean geometries. Table 1 summarizes the main results obtained from our study and lists the generalized exponentiated gradient update induced by the deformed exponential functions corresponding to the deformed logarithms used to define the various trace-form entropies.

Table 1.

Overview of the generalized exponentiated gradient (GEG) updates.

| Entropy | Deformed Exponential | MD/GEG Update |
|---|---|---|
| Shannon | $\exp(x)=\sum_{i=0}^{\infty} x^{i}/i!$ | $\mathbf{w}_{t+1}=\mathbf{w}_t \odot \exp\big({-\eta \nabla L(\mathbf{w}_t)}\big)$ (EG) |
| Tsallis | $\exp^{T}_{q}(x)=[1+(1-q)x]_{+}^{1/(1-q)}$, $q\neq 1$; $\exp(x)$ for $q=1$ | $\mathbf{w}_{t+1}=\exp^{T}_{q}\!\big(\log^{T}_{q}(\mathbf{w}_t) - \eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)\big) = \mathbf{w}_t \otimes_q \exp^{T}_{q}\big({-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)}\big)$ (q-GEG) |
| Schwämmle–Tsallis | $\exp^{ST}_{q,q'}(x)=\Big[1+\frac{1-q}{1-q'}\ln\big(1+(1-q')x\big)\Big]^{1/(1-q)}$ | $\mathbf{w}_{t+1}=\Big[\mathbf{w}_t-\eta\,\mathrm{diag}\Big(\mathbf{w}_t^{q}\exp\big(\tfrac{1-q'}{1-q}(1-\mathbf{w}_t^{1-q})\big)\Big)\nabla_{\mathbf{w}} L(\mathbf{w}_t)\Big]_{+}$ |
| Kaniadakis | $\exp^{K}_{\kappa}(x)=\exp\big(\tfrac{1}{\kappa}\,\mathrm{arcsinh}(\kappa x)\big)$, $-1<\kappa<1$; $\exp(x)$ for $\kappa=0$ | $\mathbf{w}_{t+1}=\mathbf{w}_t \otimes_\kappa \exp^{K}_{\kappa}\big({-\eta \nabla_{\mathbf{w}} L(\mathbf{w}_t)}\big)$ (κ-GEG) |
| KLS | $\exp_{\kappa,r}(x)$ | $\mathbf{w}_{t+1}=\Big[\mathbf{w}_t-\eta_t\,\mathrm{diag}\big((\partial \log_{\kappa,r}(\mathbf{w}_t)/\partial \mathbf{w}_t)^{-1}\big)\nabla L(\mathbf{w}_t)\Big]_{+}$ |
| Generic | $\exp_D(x)$ | $\tilde{\mathbf{w}}_{t+1}=\Big[\mathbf{w}_t-\eta\,\mathrm{diag}\big((d\log_D(\mathbf{w}_t)/d\mathbf{w}_t)^{-1}\big)\nabla_{\mathbf{w}} \hat{L}(\mathbf{w}_t)\Big]_{+}$, $\ \mathbf{w}_{t+1}=\tilde{\mathbf{w}}_{t+1}/\|\tilde{\mathbf{w}}_{t+1}\|_1$ |

The theoretical developments presented not only unify additive and multiplicative gradient update rules via Bregman divergences but also pave the way for designing robust machine learning algorithms that have the ability to adapt precisely to the structure of training data distributions via hyperparameters. Future work will investigate broader classes of entropic functions, extending the framework to non-convex and stochastic optimization settings, applying the proposed approach to practical problems, and performing systematic comparisons through computer simulation experiments.

Acknowledgments

We wish to express our sincere appreciation to Shun-ichi Amari for his profound influence and invaluable contributions and scientific collaborations. His expertise proved instrumental in shaping our collective understanding and continues to inspire our future endeavors.

Appendix A. Deformed Algebra and Calculus

Appendix A.1. q-Algebra and Calculus

In this section we briefly summarize q–Algebra [42,47]:

  • q-sum: $x \oplus_q y = x + y + (1-q)xy$

  • Neutral element of the q-sum: $x \oplus_q 0 = 0 \oplus_q x = x$

  • q-subtraction: $x \ominus_q y = \dfrac{x-y}{1+(1-q)y}$, $y \neq -1/(1-q)$

  • q-product: $x \otimes_q y = \big[x^{1-q} + y^{1-q} - 1\big]_{+}^{1/(1-q)}$

  • Neutral element of the q-product: $x \otimes_q 1 = 1 \otimes_q x = x$

  • q-division: $x \oslash_q y = x \otimes_q (1 \oslash_q y) = \big[x^{1-q} - y^{1-q} + 1\big]_{+}^{1/(1-q)}$

  • Inverse of the q-product: $1 \oslash_q x = \big[2 - x^{1-q}\big]_{+}^{1/(1-q)}$.
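These operations translate directly into code (an illustrative transcription of ours; the $[\cdot]_+$ truncation is kept explicit), together with checks of the listed identities and of the product rule $\exp_q(x \oplus_q y) = \exp_q(x)\exp_q(y)$:

```python
import numpy as np

def q_sum(x, y, q):
    return x + y + (1.0 - q) * x * y

def q_product(x, y, q):
    # [x^(1-q) + y^(1-q) - 1]_+ ^ (1/(1-q))
    return np.maximum(x**(1.0 - q) + y**(1.0 - q) - 1.0, 0.0) ** (1.0 / (1.0 - q))

def q_division(x, y, q):
    # [x^(1-q) - y^(1-q) + 1]_+ ^ (1/(1-q))
    return np.maximum(x**(1.0 - q) - y**(1.0 - q) + 1.0, 0.0) ** (1.0 / (1.0 - q))

def exp_q(x, q):
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

q, x, y = 0.5, 2.0, 3.0
assert np.isclose(q_sum(x, 0.0, q), x)                      # neutral element of q-sum
assert np.isclose(q_product(x, 1.0, q), x)                  # neutral element of q-product
assert np.isclose(q_division(q_product(x, y, q), y, q), x)  # division undoes the product
# the q-exponential maps q-sums to ordinary products:
assert np.isclose(exp_q(q_sum(x, y, q), q), exp_q(x, q) * exp_q(y, q))
```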

Appendix A.2. κ–Algebra and Calculus

In this section we briefly summarize the κ-algebra, especially the κ-product properties [36,37,38,39]:

  • κ-sum: $x \oplus_\kappa y = x\sqrt{1+\kappa^2 y^2} + y\sqrt{1+\kappa^2 x^2}$

  • Neutral element of the κ-sum: $x \oplus_\kappa 0 = 0 \oplus_\kappa x = x$

  • κ-subtraction: $x \ominus_\kappa y = x\sqrt{1+\kappa^2 y^2} - y\sqrt{1+\kappa^2 x^2}$

  • κ-product: $x \otimes_\kappa y = \exp\Big(\tfrac{1}{\kappa}\,\mathrm{arcsinh}\big((x^{\kappa} - x^{-\kappa} + y^{\kappa} - y^{-\kappa})/2\big)\Big)$

  • The κ-product admits the unity as a neutral element: $x \otimes_\kappa 1 = 1 \otimes_\kappa x = x$

  • The κ-product is commutative: $x \otimes_\kappa y = y \otimes_\kappa x$

  • The κ-product is associative: $(x \otimes_\kappa y) \otimes_\kappa z = x \otimes_\kappa (y \otimes_\kappa z)$

  • The inverse element of x is 1/x, i.e., $x \otimes_\kappa (1/x) = 1$

  • κ-division: $x \oslash_\kappa y = x \otimes_\kappa (1/y) = \exp\Big(\tfrac{1}{\kappa}\,\mathrm{arcsinh}\big((x^{\kappa} - x^{-\kappa} - y^{\kappa} + y^{-\kappa})/2\big)\Big)$

  • Inverse of the κ-product: $1 \oslash_\kappa x = x^{-1}$.
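The κ-product properties listed above can be verified directly (an illustrative transcription of ours, with arbitrary test values):

```python
import numpy as np

def log_kappa(x, kappa):
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def exp_kappa(u, kappa):
    # equivalent to (k u + sqrt(1 + k^2 u^2))^(1/k)
    return np.exp(np.arcsinh(kappa * u) / kappa)

def kappa_product(x, y, kappa):
    # x (x)_k y = exp( (1/k) arcsinh( (x^k - x^-k + y^k - y^-k) / 2 ) )
    s = (x**kappa - x**(-kappa) + y**kappa - y**(-kappa)) / 2.0
    return np.exp(np.arcsinh(s) / kappa)

def kappa_sum(x, y, kappa):
    return x * np.sqrt(1.0 + kappa**2 * y**2) + y * np.sqrt(1.0 + kappa**2 * x**2)

k, x, y, z = 0.4, 0.7, 2.5, 1.3
assert np.isclose(kappa_sum(x, 0.0, k), x)                         # neutral element
assert np.isclose(kappa_product(x, 1.0, k), x)                     # unity is neutral
assert np.isclose(kappa_product(x, 1.0 / x, k), 1.0)               # 1/x is the inverse
assert np.isclose(kappa_product(x, y, k), kappa_product(y, x, k))  # commutativity
assert np.isclose(kappa_product(kappa_product(x, y, k), z, k),
                  kappa_product(x, kappa_product(y, z, k), k))     # associativity
# equivalently, the kappa-product is exp_k applied to the sum of kappa-logarithms:
assert np.isclose(kappa_product(x, y, k),
                  exp_kappa(log_kappa(x, k) + log_kappa(y, k), k))
```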

Author Contributions

Conceptualization, A.C.; writing–original draft, A.C. and T.T.; editing and visualization, S.C. and F.N.; validation, F.N. and S.C.; formal analysis, A.C., T.T., F.N. and S.C.; supervision and project administration, A.C. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

No new data were created or analyzed due to the theoretical character of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

S. Cruces was supported in part by the MICIU/AEI/10.13039/501100011033 under Grant PID2021-123090NB-I00, in part by ERDF/EU.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1. Nemirovsky A., Yudin D.B. Problem Complexity and Method Efficiency in Optimization. John Wiley and Sons; Hoboken, NJ, USA: 1983.
  • 2. Amid E., Warmuth M.K. Reparameterizing Mirror Descent as Gradient Descent; Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS’20); Vancouver, BC, Canada, 6–12 December 2020; Red Hook, NY, USA: Curran Associates Inc.; 2020. pp. 8430–8439.
  • 3. Amid E., Warmuth M.K. Winnowing with Gradient Descent; Proceedings of the 33rd International Conference on Algorithmic Learning Theory, PMLR 125; Graz, Austria, 9–12 July 2020; pp. 163–182.
  • 4. Ghai U., Hazan E., Singer Y. Exponentiated Gradient Meets Gradient Descent; Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117; San Diego, CA, USA, 8–11 February 2020; pp. 386–407.
  • 5. Shalev-Shwartz S. Online learning and online convex optimization. Found. Trends Mach. Learn. 2011;4:107–194. doi: 10.1561/2200000018.
  • 6. Beck A., Teboulle M. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett. 2003;31:167–175. doi: 10.1016/S0167-6377(02)00231-6.
  • 7. Amari S. Natural gradient works efficiently in learning. Neural Comput. 1998;10:251–276. doi: 10.1162/089976698300017746.
  • 8. Amari S. Information Geometry and Its Applications. Volume 194. Springer; Berlin/Heidelberg, Germany: 2016.
  • 9. Amari S. Information Geometry and Its Applications: Convex Function and Dually Flat Manifold. In: Nielsen F., editor. Emerging Trends in Visual Computing. Springer; Berlin/Heidelberg, Germany: 2009. pp. 75–102. Springer Lecture Notes in Computer Science.
  • 10. Amari S. Alpha-divergence is unique, belonging to both f-divergence and Bregman divergence classes. IEEE Trans. Inf. Theory. 2009;55:4925–4931. doi: 10.1109/TIT.2009.2030485.
  • 11. Amari S., Cichocki A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. 2010;58:183–195. doi: 10.2478/v10175-010-0019-1.
  • 12. Amid E., Nielsen F., Nock R., Warmuth M.K. Optimal transport with tempered exponential measures. Proc. AAAI Conf. Artif. Intell. 2024;38:10838–10846. doi: 10.1609/aaai.v38i10.28957.
  • 13. Raskutti G., Mukherjee S. The information geometry of mirror descent. IEEE Trans. Inf. Theory. 2015;61:1451–1457. doi: 10.1109/TIT.2015.2388583.
  • 14. Cichocki A., Cruces S., Amari S.I. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy. 2014;13:134–170. doi: 10.3390/e13010134.
  • 15. Cichocki A., Cruces S., Sarmiento A., Tanaka T. Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection. IEEE Access. 2024;12:197000–197020. doi: 10.1109/ACCESS.2024.3520389.
  • 16. Kainth A.S., Wong T.-K.L., Rudzicz F. Conformal mirror descent with logarithmic divergences. Inf. Geom. 2024;7(Suppl. 1):303–327. doi: 10.1007/s41884-022-00089-3.
  • 17. Cichocki A. Generalized Exponentiated Gradient Algorithms Using the Euler Two-Parameter Logarithm. arXiv. 2025. arXiv:2502.17500. doi: 10.48550/arXiv.2502.17500.
  • 18. Cichocki A. Mirror Descent Using the Tempesta Generalized Multi-parametric Logarithms. arXiv. 2025. arXiv:2506.13984.
  • 19. Helmbold D.P., Schapire R.E., Singer Y., Warmuth M.K. On-line Portfolio Selection Using Multiplicative Updates. Math. Financ. 1998;8:325–347. doi: 10.1111/1467-9965.00058.
  • 20. Kivinen J., Warmuth M.K. Exponentiated Gradient versus Gradient Descent for Linear Predictors. Inf. Comput. 1997;132:1–63. doi: 10.1006/inco.1996.2612.
  • 21. Kivinen J., Warmuth M.K. Additive Versus Exponentiated Gradient Updates for Linear Prediction; Proceedings of the Twenty-Seventh Annual ACM Symposium on Theory of Computing; Las Vegas, NV, USA, 29 May–1 June 1995; pp. 209–218.
  • 22. Bregman L. The relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming. Comp. Math. Phys. USSR. 1967;7:200–217. doi: 10.1016/0041-5553(67)90040-7.
  • 23. Burachik R.S., Dao M.N., Lindstrom S.B. The generalized Bregman distance. SIAM J. Optim. 2021;31:404–424. doi: 10.1137/19M1288140.
  • 24. Martinez-Legaz J.E., Tamadoni Jahromi M., Naraghirad E. On Bregman-type distances and their associated projection mappings. J. Optim. Theory Appl. 2022;193:107–117. doi: 10.1007/s10957-021-01970-4.
  • 25. Nielsen F., Nock R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017;24:1123–1127. doi: 10.1109/LSP.2017.2712195.
  • 26. Nock R., Nielsen F. Bregman divergences and surrogates for learning. IEEE Trans. Pattern Anal. Mach. Intell. 2008;31:2048–2059. doi: 10.1109/TPAMI.2008.225.
  • 27. Cichocki A., Amari S.I. Families of α-β- and γ-divergences: Flexible and robust measures of similarities. Entropy. 2010;12:1532–1568. doi: 10.3390/e12061532.
  • 28. Cichocki A., Zdunek R., Phan A.H., Amari S.I. Nonnegative Matrix and Tensor Factorizations. John Wiley and Sons; Hoboken, NJ, USA: 2009. Multiplicative Iterative Algorithms for NMF with Sparsity Constraints; pp. 131–202. Chapter 3.
  • 29. Cichocki A., Zdunek R., Amari S. Independent Component Analysis and Signal Separation. Volume 3889. Springer; Berlin/Heidelberg, Germany: 2006. Csiszár’s Divergences for Nonnegative Matrix Factorization: Family of New Algorithms; pp. 32–39.
  • 30. Gunasekar S., Woodworth B., Srebro N. Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent; Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021; San Diego, CA, USA, 13–15 April 2021; pp. 2305–2313.
  • 31. Nielsen F. A Note on the Natural Gradient and Its Connections with the Riemannian Gradient, the Mirror Descent, and the Ordinary Gradient. GitHub Report. (accessed on 1 August 2020). Available online: https://franknielsen.github.io/blog/NaturalGradientConnections/NaturalGradientConnections.pdf.
  • 32. Hristopulos D.T., da Silva S.L.E., Scarfone A.M. Twenty Years of Kaniadakis Entropy: Current Trends and Future Perspectives. Entropy. 2025;27:247. doi: 10.3390/e27030247.
  • 33. Naudts J. Deformed exponentials and logarithms in generalized thermostatistics. Phys. A Stat. Mech. Its Appl. 2002;316:323–334. doi: 10.1016/S0378-4371(02)01018-X.
  • 34. Tsallis C. Entropy. Encyclopedia. 2022;2:264–300. doi: 10.3390/encyclopedia2010018.
  • 35. Wada T., Scarfone A.M. Finite difference and averaging operators in generalized entropies. J. Phys. Conf. Ser. 2010;201:012005. doi: 10.1088/1742-6596/201/1/012005.
  • 36. Kaniadakis G., Scarfone A.M. A new one-parameter deformation of the exponential function. Phys. A Stat. Mech. Its Appl. 2002;305:69–75. doi: 10.1016/S0378-4371(01)00642-2.
  • 37. Kaniadakis G. Statistical mechanics in the context of special relativity. Phys. Rev. E. 2002;66:056125. doi: 10.1103/PhysRevE.66.056125.
  • 38. Kaniadakis G., Lissia M., Scarfone A.M. Deformed logarithms and entropies. Phys. A Stat. Mech. Its Appl. 2004;340:41–49. doi: 10.1016/j.physa.2004.03.075.
  • 39. Kaniadakis G., Lissia M., Scarfone A.M. Two-parameter deformations of logarithm, exponential, and entropy: A consistent framework for generalized statistical mechanics. Phys. Rev. E. 2005;71:046128. doi: 10.1103/PhysRevE.71.046128.
  • 40. Tempesta P. A theorem on the existence of trace-form generalized entropies. Proc. R. Soc. A Math. Phys. Eng. Sci. 2015;471:20150165. doi: 10.1098/rspa.2015.0165.
  • 41. Wada T., Scarfone A.M. On the Kaniadakis distributions applied in statistical physics and natural sciences. Entropy. 2023;25:292. doi: 10.3390/e25020292.
  • 42. Gomez I.S., Borges E.P. Algebraic structures and position-dependent mass Schrödinger equation from group entropy theory. Lett. Math. Phys. 2021;111:43. doi: 10.1007/s11005-021-01387-0.
  • 43. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988;52:479–487. doi: 10.1007/BF01016429.
  • 44. Tsallis C. What are the numbers that experiments provide. Quim. Nova. 1994;17:468–471.
  • 45. Ishige K., Salani P., Takatsu A. Hierarchy of deformations in concavity. Inf. Geom. 2024;7(Suppl. 1):251–269. doi: 10.1007/s41884-022-00088-4.
  • 46. Box G.E.P., Cox D.R. An Analysis of Transformations. J. R. Stat. Soc. Ser. B. 1964;26:211–252. doi: 10.1111/j.2517-6161.1964.tb00553.x.
  • 47. Borges E.P. A possible deformed algebra and calculus inspired in nonextensive thermostatistics. Phys. A Stat. Mech. Its Appl. 2004;340:95–101. doi: 10.1016/j.physa.2004.03.082.
  • 48. Yamano T. Some properties of q-logarithm and q-exponential functions in Tsallis statistics. Phys. A Stat. Mech. Its Appl. 2002;305:486–496. doi: 10.1016/S0378-4371(01)00567-2.
  • 49. Nock R., Amid E., Warmuth M.K. Boosting with Tempered Exponential Measures. arXiv. 2023. arXiv:2306.05487. doi: 10.48550/arXiv.2306.05487.
  • 50. Borges E.P., Roditi I. A family of nonextensive entropies. Phys. Lett. A. 1998;246:399–402. doi: 10.1016/S0375-9601(98)00572-6.
  • 51. Schwämmle V., Tsallis C. Two-parameter generalization of the logarithm and exponential functions and Boltzmann-Gibbs-Shannon entropy. J. Math. Phys. 2007;48:113301. doi: 10.1063/1.2801996.
  • 52. Cardoso P.G., Borges E.P., Lobao T.C., Pinho S.T. Nondistributive algebraic structures derived from nonextensive statistical mechanics. J. Math. Phys. 2008;49:093509. doi: 10.1063/1.2982233.
  • 53. Corcino C.B., Corcino R.B. Three-Parameter Logarithm and Entropy. J. Funct. Spaces. 2020;2020:9791789. doi: 10.1155/2020/9791789.
  • 54. Kaniadakis G. Maximum entropy principle and power-law tailed distributions. Eur. Phys. J. B. 2009;70:3–13. doi: 10.1140/epjb/e2009-00161-0.
  • 55. Mittal D.P. On some functional equations concerning entropy, directed divergence and inaccuracy. Metrika. 1975;22:35–45. doi: 10.1007/BF01899712.
  • 56. Sharma B.D., Taneja I.J. Entropy of type (α, β) and other generalized measures in information theory. Metrika. 1975;22:205–215. doi: 10.1007/BF01899728.
  • 57. Taneja I.J. On generalized information measures and their applications. Adv. Electron. Electron Phys. 1989;76:327–413.
  • 58. Scarfone A.M., Suyari H., Wada T. Gauss law of error revisited in the framework of Sharma-Taneja-Mittal information measure. Cent. Eur. J. Phys. 2009;7:414–420. doi: 10.2478/s11534-009-0002-3.
  • 59. Kaniadakis G., Scarfone A.M., Sparavigna A., Wada T. Composition law of κ-entropy for statistically independent systems. Phys. Rev. E. 2017;95:052112. doi: 10.1103/PhysRevE.95.052112.
  • 60. Furuichi S. An axiomatic characterization of a two-parameter extended relative entropy. J. Math. Phys. 2010;51:123302. doi: 10.1063/1.3525917.
  • 61. Da Silva G.B., Ramos R.V. The Lambert-Tsallis Wq function. Phys. A Stat. Mech. Its Appl. 2019;525:164–170. doi: 10.1016/j.physa.2019.03.046.


