Entropy. 2022 Nov 14;24(11):1654. doi: 10.3390/e24111654

Tsallis Entropy for Loss Models and Survival Models Involving Truncated and Censored Random Variables

Vasile Preda 1,2,3, Silvia Dedu 2,4,*, Iuliana Iatan 5, Ioana Dănilă Cernat 3, Muhammad Sheraz 6,7
Editor: Karagrigoriou Alexandros
PMCID: PMC9689868  PMID: 36421509

Abstract

The aim of this paper consists in developing an entropy-based approach to risk assessment for actuarial models involving truncated and censored random variables by using the Tsallis entropy measure. The effect of some partial insurance mechanisms, such as inflation, truncation and censoring from above and truncation and censoring from below, upon the entropy of losses is investigated in this framework. Analytic expressions for the per-payment and per-loss entropies are obtained, and the relationships between these entropies are studied. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is computed for the exponential, Weibull, χ2 and Gamma distributions. In this context, the properties of the resulting entropies, such as the residual loss entropy and the past loss entropy, are studied as a result of using a deductible and a policy limit, respectively. Relationships between these entropy measures are derived, and the combined effect of a deductible and a policy limit is also analyzed. By investigating residual and past entropies for survival models, the entropies of losses corresponding to the proportional hazard and proportional reversed hazard models are derived. The Tsallis entropy approach for actuarial models involving truncated and censored random variables is new and more realistic, since it allows a greater degree of flexibility and improves the modeling accuracy.

Keywords: Tsallis entropy measures, risk assessment, survival models, loss models, truncation, censoring, proportional hazard model, proportional reversed hazard model

1. Introduction

Risk assessment represents an important topic in various fields, since it allows designing the optimal strategy in many real-world problems. The fundamental concept of entropy can be used to evaluate the uncertainty degree corresponding to the result of an experiment, phenomenon or random variable. Recent research results in statistics show an increased interest in using different entropy measures. Many authors have dealt with this matter, among them Koukoumis and Karagrigoriou [1], Iatan et al. [2], Li et al. [3], Miśkiewicz [4], Toma et al. [5], Moretto et al. [6], Remuzgo et al. [7], Sheraz et al. [8] and Toma and Leoni-Aubin [9]. One of the most important information measures, the Tsallis entropy, has attracted considerable interest in statistical physics and many other fields as well. We can mention here the contributions of Nayak et al. [10], Pavlos et al. [11] and Singh and Cui [12]. Recently, Balakrishnan et al. [13] proposed a general formulation of a class of entropy measures depending on two parameters, which includes Shannon, Tsallis and fractional entropy as special cases.

As entropy can be regarded as a measure of variability for absolutely continuous random variables, or as a measure of variation or diversity of the possible values of a discrete random variable, it can be used for risk assessment in various domains. In actuarial science, one of the main objectives which defines the optimal strategy of an insurance company is directed towards minimizing the risk of the claims. Ebrahimi [14] and Ebrahimi and Pellerey [15] studied the problem of measuring uncertainty in life distributions. The uncertainty corresponding to loss random variables in actuarial models can also be evaluated by the entropy of the loss distribution. Frequently in actuarial practice, as a consequence of using deductibles and policy limits, practitioners have to deal with transformed data, generated by truncation and censoring. Baxter [16] and Zografos [17] developed information measure methods for mixed and censored random variables, respectively. The entropic approach enables the assessment of the uncertainty degree for loss models involving truncated and censored random variables. Sachlas and Papaioannou [18] investigated the effect of inflation, truncation or censoring from below or above on the Shannon entropy of losses of insurance policies. In this context of per-payment and per-loss models, they derived analytic formulas for the Shannon entropy of actuarial models involving several types of partial insurance coverage and studied the properties of the resulting entropies. Recent results in this field have also been obtained by Gupta and Gupta [19], Di Crescenzo and Longobardi [20] and Meselidis and Karagrigoriou [21].

This paper aims to develop several entropy-based risk models involving truncated and censored loss random variables. In this framework, the effect of some partial insurance schemes, such as truncation and censoring from above, truncation and censoring from below and inflation, is investigated using the Tsallis entropy. The paper is organized as follows. In Section 2, some preliminary results are presented. In Section 3, representation formulas for the Tsallis entropy corresponding to the truncated and censored loss random variables in the per-payment and per-loss approach are derived, and the relationships between these entropies are obtained. Moreover, the combined effect of a deductible and a policy limit is investigated. In Section 4, closed formulas for the Tsallis entropy corresponding to some survival models are derived, including the proportional hazard and the proportional reversed hazard models. Some concluding remarks are provided in the last section.

2. Preliminaries

2.1. The Exponential Distribution

An exponentially distributed random variable X ∼ Exp(λ) is defined by the probability density function:

f(x) = { λ e^{−λx}, if x ≥ 0; 0, if x < 0 }, (1)

with λ ∈ ℝ, λ > 0, and the cumulative distribution function:

F_X(x) = 1 − e^{−λx}, x ≥ 0. (2)

2.2. The Weibull Distribution

A Weibull distributed random variable X ∼ W(α, λ, γ) is closely related to an exponentially distributed random variable and has the probability density function:

f(x) = { γλ(x − α)^{γ−1} e^{−λ(x−α)^γ}, if x ≥ α; 0, if x < α }, (3)

with α, λ, γ ∈ ℝ and λ, γ > 0.

If X ∼ Exp(1), then a Weibull distributed random variable can be generated using the formula:

W = α + (X/λ)^{1/γ}. (4)
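The transformation (4) can be checked numerically through the distribution functions: the cumulative distribution function implied by density (3) is F_W(w) = 1 − e^{−λ(w−α)^γ} for w ≥ α, and the map w = α + (x/λ)^{1/γ} must carry the Exp(1) distribution function onto it. The short sketch below is our own illustration (function names are ours, not from the paper):

```python
import math

# CDF of Exp(1): F_X(x) = 1 - exp(-x)
def exp1_cdf(x):
    return 1.0 - math.exp(-x)

# Weibull CDF implied by density (3): F_W(w) = 1 - exp(-lam*(w - a)**g), w >= a
def weibull_cdf(w, a, lam, g):
    return 1.0 - math.exp(-lam * (w - a) ** g)

# Transformation (4): W = a + (X/lam)**(1/g) maps Exp(1) onto W(a, lam, g),
# so F_W(a + (x/lam)**(1/g)) must coincide with F_X(x) for every x >= 0.
a, lam, g = 2.0, 1.3, 0.7
for x in [0.1, 0.5, 1.0, 3.0, 10.0]:
    w = a + (x / lam) ** (1.0 / g)
    assert abs(weibull_cdf(w, a, lam, g) - exp1_cdf(x)) < 1e-12
```

The check is exact up to floating-point error, since λ(w − α)^γ = x by construction.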

2.3. The χ2 Distribution

Let Z_i, 1 ≤ i ≤ γ, be independent Gaussian random variables, each N(0,1) distributed. A χ2 random variable with γ degrees of freedom can be represented as:

χ2 = Σ_{i=1}^{γ} Z_i^2, γ ∈ ℕ*. (5)

A χ2 distributed random variable with γ degrees of freedom is represented by the probability density function:

f(x) = (1 / (2^{γ/2} Γ(γ/2))) x^{γ/2 − 1} e^{−x/2}, x ≥ 0, (6)

where Γ denotes the Euler Gamma function.

2.4. The Gamma Distribution

A Gamma distributed random variable X ∼ G(α, λ, γ) is defined by the probability density function [22]:

f(x) = { (λ^γ / Γ(γ)) (x − α)^{γ−1} e^{−λ(x−α)}, if x ≥ α; 0, if x < α }, (7)

where α ∈ ℝ is the location parameter and λ > 0 and γ > 0 are, respectively, the scale parameter and the shape parameter of the variable X.

We can notice that an exponentially distributed random variable is a Gamma distributed random variable G(0, λ, 1), and a χ2 distributed random variable is a Gamma distributed random variable G(0, 1/2, γ/2).

If Y ∼ G(α, λ, γ/2) and Z ∼ G(0, 1/2, γ/2), then we have the distributional identity:

Y = α + Z/(2λ). (8)

2.5. The Tsallis Entropy

Entropy represents a fundamental concept which can be used to evaluate the uncertainty associated with a random variable or with the result of an experiment. It provides information regarding the predictability of the results of a random variable X. The Shannon entropy, along with other measures of information, such as the Renyi entropy, may be interpreted as a descriptive quantity of the corresponding probability density function.

Entropy can be regarded as a measure of variability for absolutely continuous random variables or as a measure of variation or diversity of the possible values of discrete random variables. Due to the widespread applicability and use of information measures, the derivation of explicit expressions for various entropy and divergence measures corresponding to univariate and multivariate distributions has been a subject of interest; see, for example, Pardo [23], Toma [24], Belzunce et al. [25], Vonta and Karagrigoriou [26]. Various measures of entropy and generalizations thereof have been proposed in the literature.

The Tsallis entropy was introduced by Constantino Tsallis in 1988 [27,28,29,30] with the aim of generalizing the standard Boltzmann–Gibbs entropy and, since then, it has attracted considerable interest in the physics community, as well as outside it. Recently, Furuichi [31,32] investigated information theoretical properties of the Tsallis entropy and obtained a uniqueness theorem for the Tsallis entropy. The use of Tsallis entropy enhances the analysis and solving of some important problems regarding financial data and phenomena modeling, such as the distribution of asset returns, derivative pricing or risk aversion. Recent research in statistics increased the interest in using Tsallis entropy. Trivellato [33,34] used the minimization of the divergence corresponding to the Tsallis entropy as a criterion to select a pricing measure in the valuation problems of incomplete markets and gave conditions on the existence and on the equivalence to the basic measure of the minimal k–entropy martingale measure. Preda et al. [35,36] used Tsallis and Kaniadakis entropies to construct the minimal entropy martingale for semi-Markov regime switching interest rate models and to derive new Lorenz curves for modeling income distribution. Miranskyy et al. [37] investigated the application of some extended entropies, such as Landsberg–Vedral, Rényi and Tsallis entropies to the classification of traces related to various software defects.

Let X be a real-valued discrete random variable defined on the probability space (Ω, F, P), with the probability mass function p_X. Let α ∈ ℝ∖{1}. We introduce the definition of the Tsallis entropy [27] for discrete and absolutely continuous random variables in terms of the expected value operator with respect to a probability measure.

Definition 1. 

The Tsallis entropy corresponding to the discrete random variable X is defined by:

H_α^T(X) = E_{p_X}[ (p_X(X)^{α−1} − 1)/(1 − α) ], (9)

where E_{p_X}[·] represents the expected value operator with respect to the probability mass function p_X.

Let X be a real-valued continuous random variable defined on the probability space (Ω, F, P), with the probability density function f_X. Let α ∈ ℝ∖{1}.

Definition 2. 

The Tsallis entropy corresponding to the continuous random variable X is defined by:

H_α^T(f_X) = E_{f_X}[ (f_X(X)^{α−1} − 1)/(1 − α) ], (10)

provided that the corresponding integral exists, where E_{f_X}[·] represents the expected value operator with respect to the probability density function f_X.

In the sequel, we assume the standard properties of the expected value operator, such as additivity and homogeneity, to be known.

Note that for α = 2 the Tsallis entropy reduces to the second-order entropy [38], and for α → 1 we obtain the Shannon entropy [39]. The real parameter α was introduced in the definition of the Tsallis entropy for evaluating the degree of uncertainty more accurately. In this regard, the Tsallis parameter tunes the importance assigned to rare events in the considered model.
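For the exponential distribution Exp(λ), Definition 2 gives the closed form H_α^T = (1 − λ^{α−1}/α)/(α − 1), whose limit as α → 1 is the Shannon entropy 1 − ln λ. The following sketch (our own illustration; the quadrature helper is an assumption of ours, not a routine from the paper) checks both facts numerically:

```python
import math

def tsallis_entropy_pdf(f, alpha, lo, hi, n=200000):
    """Tsallis entropy (1 - integral of f^alpha)/(alpha - 1) by trapezoidal quadrature."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) ** alpha + f(hi) ** alpha)
    for i in range(1, n):
        s += f(lo + i * h) ** alpha
    return (1.0 - s * h) / (alpha - 1.0)

lam = 0.5
f = lambda x: lam * math.exp(-lam * x)   # Exp(lam) density

# Closed form for Exp(lam): H_alpha = (1 - lam**(alpha-1)/alpha)/(alpha - 1)
for alpha in [0.5, 2.0, 3.0]:
    closed = (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)
    assert abs(tsallis_entropy_pdf(f, alpha, 0.0, 200.0) - closed) < 1e-4

# As alpha -> 1, the value approaches the Shannon entropy 1 - ln(lam)
near_one = tsallis_entropy_pdf(f, 1.001, 0.0, 200.0)
assert abs(near_one - (1.0 - math.log(lam))) < 1e-2
```

The truncation of the integration domain at 200 is harmless here because the integrand decays exponentially.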

Highly uncertain insurance policies are less reliable. The uncertainty of the loss associated with an insurance policy can be quantified by the entropy of the corresponding loss distribution. In actuarial practice, transformed data frequently arise as a consequence of deductibles and liability limits. Recent research in statistics has increased the interest in using different entropy measures for risk assessment.

3. Tsallis Entropy Approach for Loss Models

We denote by X the random variable which models the loss corresponding to an insurance policy. We suppose that X is non-negative and denote by fX and FX its probability density function and cumulative distribution function, respectively. Let SX be the survival function of the random variable X, defined by SX(x)=P(X>x).

We consider truncated and censored random variables obtained from X, which can be used to model situations which frequently appear in actuarial practice as a consequence of using deductibles and policy limits. In the next subsections, analytical expressions for the Tsallis entropy are derived, corresponding to the loss models based on truncated and censored random variables.

3.1. Loss Models Involving Truncation or Censoring from Below

Loss models with left-truncated or censored from below random variables are used when losses are not recorded or reported below a specified threshold, mainly as a result of applying deductible policies. We denote by d the value of the threshold, referred to as the deductible value. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.

In the per-payment case, losses or claims below the value of the deductible may not be reported to the insurance company, generating truncated from below or left-truncated data.

We denote by Xlt(d) the left-truncated random variable which models the loss corresponding to an insurance policy with a deductible d in the per-payment case. It can be expressed as Xlt(d)=X|X>d, or equivalently:

X_lt(d) = { X, if X > d; not defined, if X ≤ d }. (11)

In order to investigate the effect of truncation from below, we use the Tsallis entropy for evaluating the uncertainty corresponding to the loss covered by the insurance company. The following theorem establishes the relationship between the Tsallis entropies of the random variables X and X_lt(d). We denote by H_α^T(X_lt(d)) the per-payment Tsallis entropy with a deductible d.

We denote by IA the indicator function of the set A, defined by:

I_A(x) = { 1, if x ∈ A; 0, otherwise }. (12)

In the sequel, the integrals are always supposed to be correctly defined.

Theorem 1. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and d > 0. The Tsallis entropy H_α^T(X_lt(d)) of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d can be expressed as follows:

H_α^T(X_lt(d)) = S_X^{−α}(d) [ H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) ] + (1 − S_X^{1−α}(d))/(α − 1). (13)

Proof. 

The probability density function of the random variable X_lt(d) is given by f_{X_lt(d)}(x) = f_X(x)/S_X(d), x > d. Therefore, the Tsallis entropy of the random variable X_lt(d) can be expressed as follows:

H_α^T(X_lt(d)) = (1/((1 − α) S_X(d))) E_{f_X}[ ((f_X(X)/S_X(d))^{α−1} − 1) I_{d<X<∞} ]
= S_X^{−α}(d) E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ) + (1 − S_X^{1−α}(d))/(α − 1)
= S_X^{−α}(d) [ E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) ) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) ] + (1 − S_X^{1−α}(d))/(α − 1)
= S_X^{−α}(d) [ H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) ] + (1 − S_X^{1−α}(d))/(α − 1). □
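Formula (13) can also be verified numerically. The sketch below is our own illustration (not part of the paper): for exponential losses every quantity appearing in (13) has a closed form, so both sides can be evaluated and compared.

```python
import math

lam, alpha, d = 0.8, 2.5, 1.2
S_d = math.exp(-lam * d)      # survival function S_X(d)
F_d = 1.0 - S_d               # distribution function F_X(d)

# Closed form of the integral of f_X^alpha over (a, b) for the Exp(lam) density
def int_f_alpha(a, b):
    return lam ** (alpha - 1.0) * (math.exp(-alpha * lam * a)
                                   - math.exp(-alpha * lam * b)) / alpha

# Direct computation: the density of X_lt(d) is f_X(x)/S_X(d) on (d, inf)
h_lt_direct = (1.0 - S_d ** (-alpha) * int_f_alpha(d, math.inf)) / (alpha - 1.0)

# Right-hand side of (13)
h_x = (1.0 - int_f_alpha(0.0, math.inf)) / (alpha - 1.0)    # Tsallis entropy of X
e_term = (int_f_alpha(0.0, d) - F_d) / (alpha - 1.0)        # E[(f^{a-1}-1)/(a-1) on (0,d)]
h_lt_formula = S_d ** (-alpha) * (h_x + e_term) \
    + (1.0 - S_d ** (1.0 - alpha)) / (alpha - 1.0)

assert abs(h_lt_direct - h_lt_formula) < 1e-10
```

By the memoryless property of the exponential distribution, both sides also coincide with the entropy of X itself in this particular case.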

Remark 1. 

For the limiting case α → 1, we obtain the corresponding results for the Shannon entropy from [18].

In the per-loss case corresponding to an insurance policy with a deductible d, all the claims are reported, but only the ones over the deductible value are paid. As only the real losses of the insurer are taken into consideration, this situation generates censored from below data.

We denote by Xlc(d) the left-censored random variable which models the loss corresponding to an insurance policy with a deductible d in the per-loss case. As X is censored from below at point d, it results that the random variable Xlc(d) can be expressed as follows:

X_lc(d) = { X, if X > d; 0, if X ≤ d }. (14)

We note that X_lc(d) assigns a positive probability mass at the zero point, corresponding to the case X ≤ d. In this case, X_lc(d) is not absolutely continuous, but a mixed random variable, consisting of a discrete and a continuous part. We can remark that the per-payment loss random variable can be expressed as the per-loss one, given that the latter is positive.

In the next theorem, the relation between the Tsallis entropy of the random variables X and Xlc(d) is established.

Theorem 2. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and d > 0. The Tsallis entropy H_α^T(X_lc(d)) of the left-censored loss random variable corresponding to the per-loss risk model with a deductible d can be expressed as follows:

H_α^T(X_lc(d)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) − (F_X^α(d) − F_X(d))/(α − 1). (15)

Proof. 

The Tsallis entropy of X_lc(d), which is a mixed random variable consisting of a discrete part at zero and a continuous part over (d, +∞), is given by:

H_α^T(X_lc(d)) = E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ) − (F_X^α(d) − F_X(d))/(α − 1)
= H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) − (F_X^α(d) − F_X(d))/(α − 1)

and the conclusion follows. □

Remark 2. 

Let αR{1} and d>0. Then,

H_α^T(X_lc(d)) − H_α^T(X) = E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) − (F_X^α(d) − F_X(d))/(α − 1).

It results that the difference between the Tsallis entropy of the left-censored loss random variable corresponding to the per-loss risk model and the Tsallis entropy of the loss random variable can be quantified by the right-hand side of the formula above.

Let λ > 0, α ∈ ℝ∖{1} and d > 0. Let X be Exp(λ) distributed and denote φ_lc(d, α, λ) = H_α^T(X_lc(d)) − H_α^T(X). Using Theorem 2, we obtain:

φ_lc(d, α, λ) = (1/(α − 1)) [ (λ^{α−1}/α)(1 − e^{−αλd}) − (1 − e^{−λd})^α ].
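The closed form above can be cross-checked against the entropy definitions directly, since for Exp(λ) both H_α^T(X) and H_α^T(X_lc(d)) are available explicitly. The following sketch is our own illustration (function names are ours):

```python
import math

def phi_lc(d, alpha, lam):
    """Closed form for H(X_lc(d)) - H(X) under Exp(lam) losses."""
    return ((lam ** (alpha - 1.0) / alpha) * (1.0 - math.exp(-alpha * lam * d))
            - (1.0 - math.exp(-lam * d)) ** alpha) / (alpha - 1.0)

def phi_lc_direct(d, alpha, lam):
    """Same quantity computed from the entropy definitions of X and X_lc(d)."""
    F = 1.0 - math.exp(-lam * d)   # P(X <= d): probability mass of X_lc(d) at 0
    tail = lam ** (alpha - 1.0) * math.exp(-alpha * lam * d) / alpha  # int_d^inf f^alpha
    h_lc = (1.0 - F ** alpha - tail) / (alpha - 1.0)
    h_x = (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)
    return h_lc - h_x

for d in [0.01, 0.1, 1.0]:
    for alpha in [0.5, 1.5, 2.0]:
        assert abs(phi_lc(d, alpha, 100.0) - phi_lc_direct(d, alpha, 100.0)) < 1e-10
```

The two computations agree because the identity is algebraic; the check merely guards against transcription errors.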

Figure 1 displays the graph of the function φ_lc for λ = 100 and different values of the Tsallis entropy parameter α.

Figure 1. The graph of φ_lc for λ = 100 and different values of the Tsallis entropy parameter α.

Theorem 3. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and d > 0. The Tsallis entropy measures H_α^T(X_lt(d)) and H_α^T(X_lc(d)) are connected through the following relationship:

H_α^T(X_lc(d)) − S_X^α(d) · H_α^T(X_lt(d)) = H_α^T(B_{F_X(d)}), (16)

where B_{F_X(d)} represents a Bernoulli distributed random variable with parameter F_X(d).

Proof. 

By multiplying (13) with S_X^α(d), we obtain:

S_X^α(d) · H_α^T(X_lt(d)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) + (S_X^α(d) − S_X(d))/(α − 1).

From Theorem 2, we have:

H_α^T(X_lc(d)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) − (F_X^α(d) − F_X(d))/(α − 1).

By subtracting the two relations above, we obtain:

H_α^T(X_lc(d)) − S_X^α(d) · H_α^T(X_lt(d)) = (S_X(d) − S_X^α(d))/(α − 1) + (F_X(d) − F_X^α(d))/(α − 1) =
= (1 − S_X^α(d) − F_X^α(d))/(α − 1) = H_α^T(B_{F_X(d)}). □

Now, we denote by λ(x) = f_X(x)/S_X(x), for S_X(x) > 0, the hazard rate function of the random variable X. In the next theorem, the per-payment (residual) entropy with a deductible d is expressed in terms of the hazard rate function of X.

Theorem 4. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and d > 0. The Tsallis entropy of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is given by:

H_α^T(X_lt(d)) = S_X^{−α}(d) E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) + 1/α. (17)

Proof. 

From Theorem 1, we have:

H_α^T(X_lt(d)) = S_X^{−α}(d) [ H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) ] + (1 − S_X^{1−α}(d))/(α − 1)
= S_X^{−α}(d) E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ) + (1 − S_X^{1−α}(d))/(α − 1).

Since f_X(x) = λ(x) S_X(x), we have:

E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ) = E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) + E_{f_X}( (S_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ).

Integrating by parts in the second term, using ∫_d^∞ f_X(x) S_X^{α−1}(x) dx = S_X^α(d)/α, we obtain:

E_{f_X}( (S_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<∞} ) = (S_X^α(d)/α − S_X(d))/(1 − α).

Hence,

H_α^T(X_lt(d)) = S_X^{−α}(d) E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) + (1/α − S_X^{1−α}(d))/(1 − α) + (1 − S_X^{1−α}(d))/(α − 1)
= S_X^{−α}(d) E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) + 1/α. □

Theorem 5. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1}. The Tsallis entropy H_α^T(X_lt(d)) of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is independent of d if, and only if, the hazard rate function is constant.

Proof. 

We assume that the hazard rate function is constant, therefore λ(x) = k, k > 0, for any x > 0. It results that f_X(x) = k S_X(x) for any x > 0 and, using (17) together with E_{f_X}( S_X^{α−1}(X) I_{d<X<∞} ) = S_X^α(d)/α, we obtain:

H_α^T(X_lt(d)) = S_X^{−α}(d) · ((k^{α−1} − 1)/(1 − α)) · E_{f_X}( S_X^{α−1}(X) I_{d<X<∞} ) + 1/α = (k^{α−1} − 1)/(α(1 − α)) + 1/α = (α − k^{α−1})/(α(α − 1)),

which does not depend on d.

Conversely, assuming that H_α^T(X_lt(d)) does not depend on d,

∂H_α^T(X_lt(d))/∂d = 0.

Differentiating (17) with respect to d, we obtain:

α S_X^{−α−1}(d) f_X(d) E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) − S_X^{−1}(d) f_X(d) (λ^{α−1}(d) − 1)/(1 − α) = 0,

i.e., after dividing by α f_X(d) S_X^{−1}(d),

S_X^{−α}(d) E_{f_X}( (λ^{α−1}(X) − 1)/(1 − α) · S_X^{α−1}(X) I_{d<X<∞} ) + (λ^{α−1}(d) − 1)/(α(α − 1)) = 0.

Using (17) again, the last relation can be expressed as follows:

H_α^T(X_lt(d)) − 1/α + (λ^{α−1}(d) − 1)/(α(α − 1)) = 0,

which implies

λ^{α−1}(d) = α − α(α − 1) H_α^T(X_lt(d)),

therefore,

λ(d) = [ α − α(α − 1) H_α^T(X_lt(d)) ]^{1/(α−1)}.

Using again the hypothesis that H_α^T(X_lt(d)) does not depend on d, it follows that λ does not depend on d, therefore λ is constant. □
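Theorem 5 can be illustrated numerically: a constant hazard rate k corresponds to Exp(k) losses, for which H_α^T(X_lt(d)) should equal (α − k^{α−1})/(α(α − 1)) for every d. The sketch below (our own; the quadrature routine is an assumption, not part of the paper) checks d-independence directly from the truncated density:

```python
import math

def h_lt(d, alpha, k, n=200000, cutoff=60.0):
    """Tsallis entropy of X | X > d when X ~ Exp(k) (constant hazard rate k),
    integrating the truncated density f_X(x)/S_X(d) over (d, d + cutoff)."""
    S_d = math.exp(-k * d)
    f_t = lambda x: k * math.exp(-k * x) / S_d
    h = cutoff / n
    s = 0.5 * (f_t(d) ** alpha + f_t(d + cutoff) ** alpha)
    for i in range(1, n):
        s += f_t(d + i * h) ** alpha
    return (1.0 - s * h) / (alpha - 1.0)

k, alpha = 1.3, 2.0
closed = (alpha - k ** (alpha - 1.0)) / (alpha * (alpha - 1.0))
for d in [0.0, 0.5, 2.0]:
    assert abs(h_lt(d, alpha, k) - closed) < 1e-4   # same value for every deductible d
```

The agreement for several values of d reflects the memoryless property of the exponential distribution, which is exactly the "constant hazard rate" case of the theorem.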

3.2. Loss Models Involving Truncation or Censoring from Above

Right-truncated or censored from above random variables are used in actuarial models with policy limits. In this case, losses are not recorded or reported at or above a specified threshold. We denote by u, u > 0, the value of the threshold, referred to as the policy limit or liability limit. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.

In the per-payment case, losses or claims above the value of the liability limit may not be reported to the insurance company, generating truncated from above or right-truncated data.

We denote by Xrt(u) the right-truncated random variable which models the loss corresponding to an insurance policy limit u in the per-payment case. It can be expressed as Xrt(u)=X|X<u, or equivalently:

X_rt(u) = { X, if X < u; not defined, if X ≥ u }. (18)

The relationship between the Tsallis entropies of the random variables X and X_rt(u) is established in the following theorem.

Theorem 6. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and u > 0. The Tsallis entropy H_α^T(X_rt(u)) of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u is given by:

H_α^T(X_rt(u)) = F_X^{−α}(u) [ H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) ] + (1 − F_X^{1−α}(u))/(α − 1). (19)

Proof. 

The probability density function of the random variable X_rt(u) is given by f_{X_rt(u)}(x) = f_X(x)/F_X(u), 0 < x < u. Therefore, the Tsallis entropy of the random variable X_rt(u) can be expressed as follows:

H_α^T(X_rt(u)) = (1/((1 − α) F_X(u))) E_{f_X}[ ((f_X(X)/F_X(u))^{α−1} − 1) I_{0<X<u} ]
= F_X^{−α}(u) E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{0<X<u} ) + (1 − F_X^{1−α}(u))/(α − 1)
= F_X^{−α}(u) [ E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) ) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) ] + (1 − F_X^{1−α}(u))/(α − 1)
= F_X^{−α}(u) [ H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) ] + (1 − F_X^{1−α}(u))/(α − 1). □

In the following theorem, the Tsallis entropy of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit is expressed in terms of the reversed hazard rate function, defined by τ(x) = f_X(x)/F_X(x), for F_X(x) > 0.

Theorem 7. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and u > 0. The Tsallis entropy H_α^T(X_rt(u)) of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u can be expressed in terms of the reversed hazard rate function as follows:

H_α^T(X_rt(u)) = F_X^{−α}(u) E_{f_X}( (τ^{α−1}(X) − 1)/(1 − α) · F_X^{α−1}(X) I_{0<X<u} ) (20)
+ F_X^{−α}(u) E_{f_X}( (F_X^{α−1}(X) − 1)/(1 − α) · I_{0<X<u} ) + (1 − F_X^{1−α}(u))/(α − 1). (21)

Proof. 

The probability density function of the random variable X_rt(u) is given by f_{X_rt(u)}(x) = f_X(x)/F_X(u), 0 < x < u. As in the proof of Theorem 6,

H_α^T(X_rt(u)) = F_X^{−α}(u) E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{0<X<u} ) + (1 − F_X^{1−α}(u))/(α − 1).

Since f_X(x) = τ(x) F_X(x), we have:

(f_X^{α−1}(x) − 1)/(1 − α) = (τ^{α−1}(x) − 1)/(1 − α) · F_X^{α−1}(x) + (F_X^{α−1}(x) − 1)/(1 − α),

and therefore

H_α^T(X_rt(u)) = F_X^{−α}(u) E_{f_X}( (τ^{α−1}(X) − 1)/(1 − α) · F_X^{α−1}(X) I_{0<X<u} )
+ F_X^{−α}(u) E_{f_X}( (F_X^{α−1}(X) − 1)/(1 − α) · I_{0<X<u} ) + (1 − F_X^{1−α}(u))/(α − 1). □

Now, we consider the case of per-loss right censoring. In this case, if the loss exceeds the value of the policy limit, the insurance company pays the amount u, so the insurer pays a maximum amount of u on a claim. For example, a car insurance policy covers losses up to a limit u, while larger losses are covered by the car owner. We note that the loss model with truncation from above is different from the loss model with censoring from above, which is defined by the random variable X_rc(u) = min{X, u}:

X_rc(u) = { X, if X < u; u, if X ≥ u }. (22)

This model, corresponding to the per-loss case, assumes that when the loss satisfies X ≥ u, the insurance company pays the amount u. We note that the random variable X_rc(u) is not absolutely continuous, since it assigns the probability mass S_X(u) to the point u.

In the following theorem, an analytical formula for the entropy corresponding to the random variable Xrc(u) is obtained.

Theorem 8. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1} and u > 0. The Tsallis entropy of losses for the right-censored loss random variable corresponding to the per-loss risk model with a policy limit u can be expressed as follows:

H_α^T(X_rc(u)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) − (S_X^α(u) − S_X(u))/(α − 1). (23)

Proof. 

Since X_rc(u) is a mixed random variable, with a continuous part over (0, u) and a probability mass S_X(u) at u, we have:

H_α^T(X_rc(u)) = E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{0<X<u} ) − (S_X^α(u) − S_X(u))/(α − 1)
= H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) − (S_X^α(u) − S_X(u))/(α − 1). □
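Formula (23) can be checked numerically for a concrete distribution. The sketch below is our own illustration for exponential losses: the left-hand side is computed from the mixed distribution of X_rc(u) = min{X, u}, and the right-hand side from the terms of (23).

```python
import math

lam, alpha, u = 0.4, 1.7, 2.0
f = lambda x: lam * math.exp(-lam * x)   # Exp(lam) density
S = lambda x: math.exp(-lam * x)         # survival function

def integral_f_alpha(lo, hi, n=200000):
    """Trapezoidal approximation of the integral of f^alpha over (lo, hi)."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) ** alpha + f(hi) ** alpha)
    for i in range(1, n):
        s += f(lo + i * h) ** alpha
    return s * h

big = 200.0  # effective upper bound; the tail beyond it is negligible here
# Direct entropy of the mixed variable X_rc(u) = min(X, u):
h_rc_direct = (1.0 - integral_f_alpha(0.0, u) - S(u) ** alpha) / (alpha - 1.0)
# Right-hand side of (23):
h_x = (1.0 - integral_f_alpha(0.0, big)) / (alpha - 1.0)
e_term = (integral_f_alpha(u, big) - S(u)) / (alpha - 1.0)
h_rc_formula = h_x + e_term - (S(u) ** alpha - S(u)) / (alpha - 1.0)

assert abs(h_rc_direct - h_rc_formula) < 1e-5
```

The identity is exact; the tolerance only absorbs the quadrature error.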

3.3. Loss Models Involving Truncation from Above and from Below

We denote by d the deductible and by u the retention limit, with d < u. The deductible is applied after the implementation of the retention limit u; therefore, if the value of the loss is greater than u, then the value of the maximum payment is u − d. We denote by X_lr(d,u) the loss random variable which models the payments to the policy holder under a combination of deductible and retention limit policies. X_lr(d,u) is a mixed random variable, with an absolutely continuous part over the interval (0, u − d) and two discrete parts: at 0, with probability mass F_X(d), and at u − d, with probability mass S_X(u). Following [40], the loss random variable X_lr(d,u) can be expressed by:

X_lr(d,u) = { 0, if X ≤ d; X − d, if d < X ≤ u; u − d, if X > u }. (24)

In the next theorem, the Tsallis entropy of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is derived.

Theorem 9. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1}, d > 0 and u > d. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is given by:

H_α^T(X_lr(d,u)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} )
− (F_X^α(d) − F_X(d))/(α − 1) − (S_X^α(u) − S_X(u))/(α − 1). (25)

Proof. 

The distribution of the random variable X_lr(d,u) consists of an absolutely continuous part, with density

f_{X_lr(d,u)}(x) = f_X(x + d), 0 < x < u − d, (26)

and two probability masses, F_X(d) at 0 and S_X(u) at u − d. It results:

H_α^T(X_lr(d,u)) = E_{f_X}( (f_X^{α−1}(X) − 1)/(1 − α) · I_{d<X<u} ) − (F_X^α(d) − F_X(d))/(α − 1) − (S_X^α(u) − S_X(u))/(α − 1)
= H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} )
− (F_X^α(d) − F_X(d))/(α − 1) − (S_X^α(u) − S_X(u))/(α − 1). □

The following theorem establishes the relationship between H_α^T(X_lr(d,u)), the entropy under censoring from above H_α^T(X_rc(u)) and the entropy under censoring from below H_α^T(X_lc(d)).

Theorem 10. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ ℝ∖{1}. For any d > 0 and u > d, the Tsallis entropy H_α^T(X_lr(d,u)) is related to the entropies H_α^T(X_rc(u)) and H_α^T(X_lc(d)) through the following relationship:

H_α^T(X_lr(d,u)) = H_α^T(X_lc(d)) + H_α^T(X_rc(u)) − H_α^T(X).

Proof. 

From Theorem 9, we have:

H_α^T(X_lr(d,u)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} )
− (F_X^α(d) − F_X(d))/(α − 1) − (S_X^α(u) − S_X(u))/(α − 1).

Moreover, from Theorems 8 and 2:

H_α^T(X_rc(u)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{u<X<∞} ) − (S_X^α(u) − S_X(u))/(α − 1),
H_α^T(X_lc(d)) = H_α^T(X) + E_{f_X}( (f_X^{α−1}(X) − 1)/(α − 1) · I_{0<X<d} ) − (F_X^α(d) − F_X(d))/(α − 1).

Adding the last two relations and subtracting H_α^T(X), we obtain exactly the expression of H_α^T(X_lr(d,u)) above, and the conclusion follows. □

Figure 2 illustrates the Tsallis entropy of the right-truncated loss random variable Xlr(d,u), corresponding to the per-loss risk model with a deductible d and a policy limit u for the exponential distribution with λ=0.1.

Figure 2. The Tsallis entropy H_α^T(X_lr(d,u)) of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the exponential distribution, with λ = 0.1 and different values of the Tsallis parameter α.

Figure 2 displays a similar behavior of the Tsallis entropy H_α^T(X_lr(d,u)) for all the considered values of the parameter α around 1. Thus, we remark that, for all these values of α, the Tsallis entropy H_α^T(X_lr(d,u)) is decreasing with respect to the deductible d, and it does not depend on the policy limit u.

Figure 3 represents the Tsallis entropy of losses for the right-truncated loss random variable Xlr(d,u) corresponding to the per-loss risk model with a deductible d and a policy limit u for the χ2 distribution, with γ=30 and for different values of the Tsallis parameter α, in the case d<u.

Figure 3. The Tsallis entropy H_α^T(X_lr(d,u)) of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the χ2 distribution, with γ = 30 and different values of the Tsallis parameter α.

Figure 3 reveals, for all the values of the parameter α considered, a similar decreasing behavior with respect to the deductible d of the Tsallis entropy HαT(Xlr(d,u)). Moreover, it indicates that the Tsallis entropy HαT(Xlr(d,u)) does not depend on the values of the policy limit u.

Figure 4 depicts the Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Weibull distribution, with γ=0.3,λ=1.3 and a=0 for different values of the Tsallis parameter α, in the case d<u.

Figure 4. The Tsallis entropy of losses H_α^T(X_lr(d,u)) of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Weibull distribution, with γ = 0.3, λ = 1.3, a = 0 and different values of the Tsallis parameter α.

Figure 4 highlights that the Tsallis entropy of losses H_α^T(X_lr(d,u)) is decreasing with respect to d for all the values of the parameter α considered. Moreover, the Tsallis entropy H_α^T(X_lr(d,u)) does not depend on the policy limit u for the values of the α parameter around 1, namely α = 0.9 and α = 1.1. A different behavior is detected for α = 0.5: in this case, we remark that the Tsallis entropy is increasing with respect to the policy limit u, which is realistic from the actuarial point of view. Indeed, increasing the policy limit results in a higher risk for the insurance company.

The conclusions obtained indicate that Tsallis entropy measures with parameter values significantly different from 1 can provide a better loss model involving truncation from above and from below.

Figure 5 displays the Tsallis entropy of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Gamma distribution, with γ=4.5,λ=0.1 and a=0.01 for different values of the Tsallis parameter α, in the case d<u.

Figure 5. The Tsallis entropy H_α^T(X_lr(d,u)) of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Gamma distribution, with γ = 4.5, λ = 0.1, a = 0.01 and different values of the Tsallis parameter α.

Figure 5 reveals the decreasing behavior of the Tsallis entropy HαT(Xlr(d,u)) of losses for all the values of the Tsallis parameter α considered. Moreover, for all the values of α, the Tsallis entropy HαT(Xlr(d,u)) does not depend on the policy limit u.

The following tables present the Tsallis entropy values for the Weibull distribution, corresponding to the analyzed models. The entropies HαT(X), HαT(Xlt(d)) and HαT(Xlc(d)) do not depend on the policy limit u and are therefore reported only in the first row of each α block.

Table 1 illustrates the Tsallis entropy values in the case of the Weibull distribution with λ=0.9585, γ=0.3192 and d=1.1 for different values of the Tsallis parameter α and several values of the policy limit u.

Table 1.

Tsallis entropy values for the Weibull distribution for: λ=0.9585,γ=0.3192,d=1.1.

| α | u | HαT(X) | HαT(Xlt(d)) | HαT(Xlc(d)) | HαT(Xrt(u)) | HαT(Xrc(u)) | HαT(Xlr(d,u)) |
|-----|----|--------|--------|--------|--------|--------|--------|
| 0.5 | 10 | 5.4340 | 5.5005 | 5.3837 | 3.7994 | 4.1067 | 4.0564 |
| 0.5 | 15 |  |  |  | 4.5725 | 4.7622 | 4.7120 |
| 0.5 | 20 |  |  |  | 4.9845 | 5.0914 | 5.0411 |
| 0.5 | 25 |  |  |  | 5.2006 | 5.2582 | 5.2079 |
| 0.9 | 10 | 2.5446 | 2.5778 | 2.5156 | 2.2220 | 2.3504 | 2.3214 |
| 0.9 | 15 |  |  |  | 2.4306 | 2.4880 | 2.4591 |
| 0.9 | 20 |  |  |  | 2.5054 | 2.5279 | 2.4989 |
| 0.9 | 25 |  |  |  | 2.5156 | 2.5314 | 2.5396 |
| 1 | 10 | 2.2086 | 2.2369 | 1.0910 | 2.3160 | 2.0827 | 2.0792 |
| 1 | 15 |  |  |  | 2.2534 | 2.1767 | 2.1732 |
| 1 | 20 |  |  |  | 2.2248 | 2.2004 | 2.1969 |
| 1 | 25 |  |  |  | 2.2140 | 2.2064 | 2.2030 |
| 1.5 | 10 | 1.2647 | 1.2780 | 1.2521 | 1.2094 | 1.2479 | 1.2353 |
| 1.5 | 15 |  |  |  | 1.2503 | 1.2626 | 1.2499 |
| 1.5 | 20 |  |  |  | 1.2609 | 1.2644 | 1.2518 |
| 1.5 | 25 |  |  |  | 1.2637 | 1.2647 | 1.2525 |
| 2 | 10 | 0.8459 | 0.8524 | 0.8396 | 0.8279 | 0.8433 | 0.8370 |
| 2 | 15 |  |  |  | 0.8415 | 0.8457 | 0.8394 |
| 2 | 20 |  |  |  | 0.8448 | 0.8459 | 0.8395 |
| 2 | 25 |  |  |  | 0.8456 | 0.8459 | 0.8396 |

The analysis of the results presented in Table 1 reveals that for parameter values α≠1 the Tsallis entropy corresponding to the Xrt(u) random variable is increasing with respect to the value of the policy limit u. On the other hand, for α=1 the Tsallis entropy, which reduces to the Shannon entropy measure, is decreasing with respect to u. From an actuarial perspective, when the policy limit increases, the risk of the insurance company also increases, and therefore the entropy of losses should increase, too. The behavior detected for α≠1 is reasonable in this sense, which means that the Tsallis entropy approach for evaluating the risk corresponding to the Xrt(u) random variable is more realistic.

Table 2 displays the values of the Tsallis entropy measures in the case of the Weibull distribution with λ=0.9585, γ=0.3192 and d=1.2 for different values of the Tsallis parameter α and several values of the policy limit u.

Table 2.

Tsallis entropy values for the Weibull distribution for: λ=0.9585,γ=0.3192,d=1.2.

| α | u | HαT(X) | HαT(Xlt(d)) | HαT(Xlc(d)) | HαT(Xrt(u)) | HαT(Xrc(u)) | HαT(Xlr(d,u)) |
|-----|----|--------|--------|--------|--------|--------|--------|
| 0.5 | 10 | 5.4340 | 5.5043 | 5.3300 | 3.7994 | 4.1067 | 4.0027 |
| 0.5 | 15 |  |  |  | 4.5725 | 4.7622 | 4.6583 |
| 0.5 | 20 |  |  |  | 4.9845 | 5.0910 | 4.9874 |
| 0.5 | 25 |  |  |  | 5.2006 | 5.2582 | 5.1542 |
| 0.9 | 10 | 2.5446 | 2.5796 | 2.4829 | 2.2220 | 2.3500 | 2.2887 |
| 0.9 | 15 |  |  |  | 2.4300 | 2.4880 | 2.4264 |
| 0.9 | 20 |  |  |  | 2.5054 | 2.5279 | 2.4662 |
| 0.9 | 25 |  |  |  | 2.5314 | 2.5396 | 2.4779 |
| 1 | 10 | 2.2086 | 2.2384 | 1.0621 | 2.3160 | 2.0827 | 2.0954 |
| 1 | 15 |  |  |  | 2.2534 | 2.1767 | 2.1894 |
| 1 | 20 |  |  |  | 2.2248 | 2.2004 | 2.2131 |
| 1 | 25 |  |  |  | 2.2140 | 2.2064 | 2.2192 |
| 1.5 | 10 | 1.2647 | 1.2786 | 1.2364 | 1.2094 | 1.2479 | 1.2197 |
| 1.5 | 15 |  |  |  | 1.2503 | 1.2626 | 1.2343 |
| 1.5 | 20 |  |  |  | 1.2609 | 1.2644 | 1.2362 |
| 1.5 | 25 |  |  |  | 1.2637 | 1.2647 | 1.2364 |
| 2 | 10 | 0.8459 | 0.8527 | 0.8311 | 0.8279 | 0.8433 | 0.8285 |
| 2 | 15 |  |  |  | 0.8415 | 0.8457 | 0.8309 |
| 2 | 20 |  |  |  | 0.8448 | 0.8459 | 0.8395 |
| 2 | 25 |  |  |  | 0.8448 | 0.8459 | 0.8311 |

Analyzing the results presented in Table 2, we remark the same pattern as in Table 1: for parameter values α≠1, the Tsallis entropy corresponding to the Xrt(u) random variable is increasing with respect to the policy limit u, whereas for α=1 the Shannon entropy is decreasing with respect to u. Since a higher policy limit implies a higher risk for the insurance company, the behavior of the Tsallis entropy for α≠1 is again the realistic one.

Table 3 illustrates the Tsallis entropy values in the case of the Weibull distribution with λ=0.9585, γ=0.3192 and deductible d=1.3 for various values of the Tsallis parameter α and several values of the policy limit u.

Table 3.

Tsallis entropy values for the Weibull distribution for: λ=0.9585,γ=0.3192,d=1.3.

| α | u | HαT(X) | HαT(Xlt(d)) | HαT(Xlc(d)) | HαT(Xrt(u)) | HαT(Xrc(u)) | HαT(Xlr(d,u)) |
|-----|----|--------|--------|--------|--------|--------|--------|
| 0.5 | 10 | 5.4340 | 5.5080 | 5.2754 | 3.7994 | 4.1067 | 3.9481 |
| 0.5 | 15 |  |  |  | 4.5725 | 4.7622 | 4.6036 |
| 0.5 | 20 |  |  |  | 4.9845 | 5.0914 | 4.9328 |
| 0.5 | 25 |  |  |  | 5.2006 | 5.2582 | 5.0996 |
| 0.9 | 10 | 2.5446 | 2.5812 | 2.4491 | 2.2220 | 2.3504 | 2.2549 |
| 0.9 | 15 |  |  |  | 2.4306 | 2.4880 | 2.3926 |
| 0.9 | 20 |  |  |  | 2.5054 | 2.5279 | 2.4324 |
| 0.9 | 25 |  |  |  | 2.5314 | 2.5396 | 2.4441 |
| 1 | 10 | 2.2086 | 2.2398 | 1.0321 | 2.3160 | 2.0827 | 2.1129 |
| 1 | 15 |  |  |  | 2.2534 | 2.1767 | 2.2069 |
| 1 | 20 |  |  |  | 2.2248 | 2.2004 | 2.2300 |
| 1 | 25 |  |  |  | 2.2140 | 2.2064 | 2.2366 |
| 1.5 | 10 | 1.2647 | 1.2792 | 1.2199 | 1.2094 | 1.2479 | 1.2031 |
| 1.5 | 15 |  |  |  | 1.2503 | 1.2626 | 1.2178 |
| 1.5 | 20 |  |  |  | 1.2609 | 1.2644 | 1.2196 |
| 1.5 | 25 |  |  |  | 1.2637 | 1.2647 | 1.2199 |
| 2 | 10 | 0.8459 | 0.8530 | 0.8219 | 0.8279 | 0.8433 | 0.8193 |
| 2 | 15 |  |  |  | 0.8415 | 0.8457 | 0.8218 |
| 2 | 20 |  |  |  | 0.8448 | 0.8459 | 0.8219 |
| 2 | 25 |  |  |  | 0.8456 | 0.8459 | 0.8219 |

The results in Table 3 confirm this behavior: for α≠1, the Tsallis entropy of the Xrt(u) random variable increases with the policy limit u, while the Shannon entropy (α=1) decreases with u, so the Tsallis entropy approach again captures the growth of the insurer's risk with the policy limit.

Table 4 reveals the values of all the Tsallis entropy measures analyzed in the case of the Weibull distribution with λ=0.9585, γ=0.3192 and d=1.4 for several values of the Tsallis parameter α and different values of the policy limit u.

Table 4.

Tsallis entropy values for the Weibull distribution for: λ=0.9585,γ=0.3192,d=1.4.

| α | u | HαT(X) | HαT(Xlt(d)) | HαT(Xlc(d)) | HαT(Xrt(u)) | HαT(Xrc(u)) | HαT(Xlr(d,u)) |
|-----|----|--------|--------|--------|--------|--------|--------|
| 0.5 | 10 | 5.4340 | 5.5115 | 5.2201 | 3.7994 | 4.1067 | 3.8928 |
| 0.5 | 15 |  |  |  | 4.5725 | 4.7622 | 4.5483 |
| 0.5 | 20 |  |  |  | 4.9845 | 5.0914 | 4.8774 |
| 0.5 | 25 |  |  |  | 5.2006 | 5.2582 | 5.0443 |
| 0.9 | 10 | 2.5446 | 2.5828 | 2.4145 | 2.2220 | 2.3504 | 2.2202 |
| 0.9 | 15 |  |  |  | 2.4306 | 2.4880 | 2.3579 |
| 0.9 | 20 |  |  |  | 2.5054 | 2.5279 | 2.3977 |
| 0.9 | 25 |  |  |  | 2.5310 | 2.5396 | 2.4095 |
| 1 | 10 | 2.2086 | 2.2411 | 1.0012 | 2.3160 | 2.0827 | 2.1313 |
| 1 | 15 |  |  |  | 2.2534 | 2.1767 | 2.2253 |
| 1 | 20 |  |  |  | 2.2248 | 2.2004 | 2.2490 |
| 1 | 25 |  |  |  | 2.2140 | 2.2064 | 2.2551 |
| 1.5 | 10 | 1.2647 | 1.2798 | 1.2026 | 1.2094 | 1.2479 | 1.1858 |
| 1.5 | 15 |  |  |  | 1.2626 | 1.2626 | 1.2004 |
| 1.5 | 20 |  |  |  | 1.2609 | 1.2644 | 1.2023 |
| 1.5 | 25 |  |  |  | 1.2637 | 1.2647 | 1.2025 |
| 2 | 10 | 0.8459 | 0.8532 | 0.8122 | 0.8279 | 0.8433 | 0.8096 |
| 2 | 15 |  |  |  | 0.8415 | 0.8457 | 0.8120 |
| 2 | 20 |  |  |  | 0.8448 | 0.8459 | 0.8121 |
| 2 | 25 |  |  |  | 0.8456 | 0.8459 | 0.8122 |

The results displayed in Table 4 show that for α≠1 the Tsallis entropy of the Xrt(u) random variable increases with respect to the value of the policy limit u, whereas for α=1 the entropy decreases with respect to u. Since a higher policy limit increases the risk of the insurance company, and hence the uncertainty of losses, we can conclude that in this case the right-truncated loss random variable Xrt(u) is better modeled using the Tsallis entropy measure.

Table 5 displays the Tsallis entropy values in the case of the Weibull distribution with λ=0.9585, γ=0.3192 and deductible d=1.5 for different values of the Tsallis parameter α and several values of the policy limit u.

Table 5.

Tsallis entropy values for the Weibull distribution for: λ=0.9585,γ=0.3192 and d=1.5.

| α | u | HαT(X) | HαT(Xlt(d)) | HαT(Xlc(d)) | HαT(Xrt(u)) | HαT(Xrc(u)) | HαT(Xlr(d,u)) |
|-----|----|--------|--------|--------|--------|--------|--------|
| 0.5 | 10 | 5.4340 | 5.5149 | 5.1642 | 3.7994 | 4.1067 | 3.8369 |
| 0.5 | 15 |  |  |  | 4.5725 | 4.7622 | 4.4925 |
| 0.5 | 20 |  |  |  | 4.9845 | 5.0914 | 4.8216 |
| 0.5 | 25 |  |  |  | 5.2006 | 5.2582 | 4.9884 |
| 0.9 | 10 | 2.5446 | 2.5843 | 2.3792 | 2.2220 | 2.3504 | 2.1849 |
| 0.9 | 15 |  |  |  | 2.4306 | 2.4880 | 2.3226 |
| 0.9 | 20 |  |  |  | 2.5054 | 2.5279 | 2.3624 |
| 0.9 | 25 |  |  |  | 2.5310 | 2.5396 | 2.3742 |
| 1 | 10 | 2.2086 | 2.2424 | 0.9697 | 2.3160 | 2.0827 | 2.1505 |
| 1 | 15 |  |  |  | 2.2534 | 2.1767 | 2.2445 |
| 1 | 20 |  |  |  | 2.2248 | 2.2004 | 2.2682 |
| 1 | 25 |  |  |  | 2.2140 | 2.2064 | 2.2742 |
| 1.5 | 10 | 1.2647 | 1.2803 | 1.1846 | 1.2090 | 1.2479 | 1.1679 |
| 1.5 | 15 |  |  |  | 1.2503 | 1.2626 | 1.1825 |
| 1.5 | 20 |  |  |  | 1.2609 | 1.2644 | 1.1843 |
| 1.5 | 25 |  |  |  | 1.2637 | 1.2647 | 1.1846 |
| 2 | 10 | 0.8459 | 0.8535 | 0.8018 | 0.8279 | 0.8433 | 0.7992 |
| 2 | 15 |  |  |  | 0.8415 | 0.8457 | 0.8017 |
| 2 | 20 |  |  |  | 0.8448 | 0.8459 | 0.8018 |
| 2 | 25 |  |  |  | 0.8456 | 0.8459 | 0.8018 |

Analyzing the results provided in Table 5, we remark once more that for α≠1 the Tsallis entropy corresponding to the right-truncated random variable is increasing with respect to the policy limit u, while for α=1 the Shannon entropy decreases with respect to u; the behavior observed for α≠1 is the one consistent with the increase in the insurer's risk, so the Tsallis entropy approach is again the more realistic one.

From Table 1, Table 2, Table 3, Table 4 and Table 5, we draw the following conclusions. Using the Tsallis entropy approach, when the deductible d increases, the uncertainty of losses for the insurance company decreases, since the company has to pay smaller amounts. When the policy limit u increases, the uncertainty of losses for the insurance company increases, since the company may have to pay larger amounts. Therefore, the Tsallis entropy approach is more realistic and flexible, providing a relevant perspective and a useful instrument for loss models.
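The u-dependence reported in the tables can be explored numerically. The following sketch (an illustration under the fitted parameters of Section 5, not the paper's code) computes the Tsallis entropy of the right-truncated loss Xrt(u); the log-spaced grid is only a device to handle the integrable singularity of the Weibull density at the origin when γ < 1.

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def weibull_pdf(x, lam, gam):
    return (gam / lam) * (x / lam) ** (gam - 1.0) * np.exp(-((x / lam) ** gam))

def weibull_cdf(x, lam, gam):
    return 1.0 - np.exp(-((x / lam) ** gam))

def tsallis_right_truncated(lam, gam, u, alpha, n=400_000):
    """H_alpha^T of X_rt(u), with density g(x) = f(x)/F(u) on (0, u)."""
    x = np.logspace(-12, np.log10(u), n)   # dense near 0, where f blows up
    g = weibull_pdf(x, lam, gam) / weibull_cdf(u, lam, gam)
    return (1.0 - trapezoid(g ** alpha, x)) / (alpha - 1.0)

lam, gam = 0.9585, 0.3192       # fitted Weibull parameters from Section 5
h10 = tsallis_right_truncated(lam, gam, 10.0, 0.5)
h25 = tsallis_right_truncated(lam, gam, 25.0, 0.5)
```

For α = 0.5 the computed entropy grows with u, matching the monotonicity of HαT(Xrt(u)) observed in the tables for α≠1.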

3.4. Loss Models under Inflation

Financial and actuarial models are estimated using observations made in the past years. As inflation implies an increase in losses, the models must be adjusted corresponding to the current level of loss experience. Moreover, a projection of the anticipated losses in the future needs to be performed.

Now, we study the effect of inflation on entropy. Let X be the random variable that models the loss corresponding to a certain year. We denote by F the cumulative distribution function of X and by f the probability density function of X. The random variable that models the loss after one year under the inflation effect is X(r)=(1+r)X, where r>0 represents the annual inflation rate. We denote by FX(r) the cumulative distribution function of X(r) and by fX(r) the probability density function of the random variable X(r).

The probability density function corresponding to the random variable X(r) is given by:

f_{X^{(r)}}(z) = \frac{1}{1+r}\, f_X\!\left(\frac{z}{1+r}\right), \quad z \in \mathbb{R}.

The following theorem derives the relationship between the Tsallis entropies of the random variables X and X(r)=(1+r)X.

Theorem 11. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy and let α ∈ ℝ∖{1}. The Tsallis entropy of the random variable X(r), which models the loss after one year under inflation rate r>0, is given by

H_\alpha^T\!\left(X^{(r)}\right) = (1+r)^{1-\alpha} H_\alpha^T(X) + \frac{(1+r)^{1-\alpha}-1}{1-\alpha}. \qquad (27)

Proof. 

Using the definition of the Tsallis entropy, we have:

H_\alpha^T\!\left(X^{(r)}\right) = \frac{E_{f_{X^{(r)}}}\!\left[f_{X^{(r)}}^{\alpha-1}(x)\right] - 1}{1-\alpha}.

Using the change of variable u = z/(1+r), it follows that

H_\alpha^T\!\left(X^{(r)}\right) = \frac{(1+r)^{1-\alpha} E_{f_X}\!\left[f_X^{\alpha-1}(x)\right] - 1}{1-\alpha}
= (1+r)^{1-\alpha} H_\alpha^T(X) + \frac{(1+r)^{1-\alpha}-1}{1-\alpha}.

Theorem 12. 

Let X be a non-negative random variable which models the loss corresponding to an insurance policy and let α ∈ ℝ∖{1}. For r>0, the Tsallis entropy of the random variable X(r), which models the loss after one year under inflation rate r, is always larger than that of X and is an increasing function of r.

Proof. 

Let r>0. We denote

\psi(r) = H_\alpha^T\!\left(X^{(r)}\right) - H_\alpha^T(X) = (1+r)^{1-\alpha} H_\alpha^T(X) - H_\alpha^T(X) + \frac{(1+r)^{1-\alpha}-1}{1-\alpha}
= \left[(1+r)^{1-\alpha} - 1\right]\left[H_\alpha^T(X) + \frac{1}{1-\alpha}\right].

We have:

\frac{d}{dr}\,\psi(r) = (1+r)^{-\alpha}\left[(1-\alpha) H_\alpha^T(X) + 1\right] = (1+r)^{-\alpha}\, E_{f_X}\!\left[f_X^{\alpha-1}(x)\right] > 0,

so that HαTX(r) is an increasing function of r.

Therefore, it follows that

H_\alpha^T\!\left(X^{(r)}\right) > H_\alpha^T(X).

The results obtained show that inflation increases the entropy, which means that the degree of uncertainty of losses increases compared with the case without inflation. Moreover, the uncertainty of losses increases with respect to the inflation rate.
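Identity (27) can be checked in closed form for an exponential loss, a convenient special case chosen here for illustration (the parameter values below are arbitrary): if X ~ Exp(μ), then ∫f^α = μ^(α-1)/α, and X(r) = (1+r)X is again exponential with rate μ/(1+r).

```python
# H_alpha^T(X) = (1 - mu^(alpha-1)/alpha) / (alpha - 1) for X ~ Exp(mu)
def tsallis_exponential(mu, alpha):
    return (1.0 - mu ** (alpha - 1.0) / alpha) / (alpha - 1.0)

def tsallis_inflated(mu, r, alpha):
    # right-hand side of identity (27)
    c = (1.0 + r) ** (1.0 - alpha)
    return c * tsallis_exponential(mu, alpha) + (c - 1.0) / (1.0 - alpha)

mu, r = 0.7, 0.05
for alpha in (0.5, 0.9, 1.5, 2.0):
    # direct entropy of X^(r) ~ Exp(mu/(1+r)) versus the formula (27)
    lhs = tsallis_exponential(mu / (1.0 + r), alpha)
    assert abs(lhs - tsallis_inflated(mu, r, alpha)) < 1e-12
```

Increasing r also increases the entropy, in agreement with Theorem 12.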

4. Tsallis Entropy Approach for Survival Models

In this section, we derive residual and past entropy expressions for some survival models, including the proportional hazard and the proportional reversed hazard models. Relevant results in this field have been obtained by Sachlas and Papaioannou [18], Gupta and Gupta [19], Di Crescenzo [41] and Sankaran and Gleeja [42].

Let X and Y be random variables with cumulative distribution functions F and G, probability density functions f and g and survival functions F¯ and G¯, respectively. We denote by λX and λY the hazard rate functions of the random variables X and Y, respectively.

4.1. The Proportional Hazard Rate Model

Definition 3. 

The random variables X and Y satisfy the proportional hazard rate model if there exists θ>0 such that (see Cox [43]):

S_Y(x) = S_X^{\theta}(x) \quad \text{for every } x>0. \qquad (28)

We note that the random variables X and Y satisfy the proportional hazard rate model if the hazard rate function of Y is proportional to the hazard rate function of X, i.e., λY(x)=θλX(x) for every x>0; see Cox [43].
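As a quick numerical sanity check (an illustrative sketch with arbitrarily chosen parameter values, not part of the paper), for an exponential baseline the proportional hazard model simply rescales the hazard rate:

```python
import numpy as np

# Under S_Y = S_X^theta with X ~ Exp(mu): S_X(x) = exp(-mu x), hence
# S_Y(x) = exp(-theta*mu*x), i.e., Y is exponential with hazard theta*mu,
# which is exactly lambda_Y(x) = theta * lambda_X(x).
mu, theta = 0.4, 2.5
x = np.linspace(0.0, 5.0, 501)
S_X = np.exp(-mu * x)
S_Y = S_X ** theta

# hazard of Y recovered numerically as -d/dx log S_Y (exact here, log S_Y is linear)
hazard_Y = -np.gradient(np.log(S_Y), x)
```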

In the next theorem, the Tsallis entropy of the left-truncated random variable Ylt(d) under the proportional hazard rate model is derived.

Theorem 13. 

Let X and Y be non-negative random variables, let α ∈ ℝ∖{1} and let d>0. Under the proportional hazard rate model given in (28), the Tsallis entropy of the left-truncated random variable Ylt(d) corresponding to the per-payment risk model with a deductible d can be expressed as follows:

H_\alpha^T\!\left(Y_{lt}(d)\right) = \frac{1}{1-\alpha}\left\{\frac{\theta}{S_X^{\theta}(d)}\, E_{f_X}\!\left[S_X^{\theta-1}(x)\left(\frac{\theta S_X^{\theta-1}(x)\, f_X(x)}{S_X^{\theta}(d)}\right)^{\alpha-1} \mathbb{1}_{\{d<X<\infty\}}\right] - 1\right\}. \qquad (29)

Proof. 

From (28), we obtain f_Y(x) = \theta S_X^{\theta-1}(x) f_X(x). It follows that:

H_\alpha^T\!\left(Y_{lt}(d)\right) = \frac{1}{1-\alpha}\left\{\frac{1}{S_Y(d)}\, E_{f_Y}\!\left[\left(\frac{f_Y(x)}{S_Y(d)}\right)^{\alpha-1} \mathbb{1}_{\{d<X<\infty\}}\right] - 1\right\}
= \frac{1}{1-\alpha}\left\{\frac{\theta}{S_X^{\theta}(d)}\, E_{f_X}\!\left[S_X^{\theta-1}(x)\left(\frac{\theta S_X^{\theta-1}(x)\, f_X(x)}{S_X^{\theta}(d)}\right)^{\alpha-1} \mathbb{1}_{\{d<X<\infty\}}\right] - 1\right\}.

4.2. The Proportional Reversed Hazard Rate Model

Definition 4. 

The random variables X and Y satisfy the proportional reversed hazard rate model [43] if there exists θ>0 such that

F_Y(x) = F_X^{\theta}(x) \quad \text{for every } x>0. \qquad (30)

In the next theorem, the Tsallis entropy of the right-truncated random variable Yrt(u) under the proportional reversed hazard rate model is derived.

Theorem 14. 

Let X and Y be non-negative random variables, let α ∈ ℝ∖{1} and let u>0. Under the proportional reversed hazard rate model given in (30), the Tsallis entropy of the right-truncated random variable Yrt(u) corresponding to the per-payment risk model with a policy limit u can be expressed as follows:

H_\alpha^T\!\left(Y_{rt}(u)\right) = \frac{1}{1-\alpha}\left\{\frac{\theta}{F_X^{\theta}(u)}\, E_{f_X}\!\left[F_X^{\theta-1}(x)\left(\frac{\theta F_X^{\theta-1}(x)\, f_X(x)}{F_X^{\theta}(u)}\right)^{\alpha-1} \mathbb{1}_{\{0<X<u\}}\right] - 1\right\}. \qquad (31)

Proof. 

From (30) we get

f_Y(x) = \theta F_X^{\theta-1}(x)\, f_X(x).

It follows that:

H_\alpha^T\!\left(Y_{rt}(u)\right) = \frac{1}{1-\alpha}\left\{\frac{1}{F_Y(u)}\, E_{f_Y}\!\left[\left(\frac{f_Y(x)}{F_Y(u)}\right)^{\alpha-1} \mathbb{1}_{\{0<X<u\}}\right] - 1\right\}
= \frac{1}{1-\alpha}\left\{\frac{\theta}{F_X^{\theta}(u)}\, E_{f_X}\!\left[F_X^{\theta-1}(x)\left(\frac{\theta F_X^{\theta-1}(x)\, f_X(x)}{F_X^{\theta}(u)}\right)^{\alpha-1} \mathbb{1}_{\{0<X<u\}}\right] - 1\right\}.
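Analogously to the hazard case, the proportional reversed hazard model rescales the reversed hazard f/F. A minimal numerical sketch with a uniform baseline (an arbitrary choice made here for illustration):

```python
import numpy as np

# With F_Y = F_X^theta, differentiating gives f_Y = theta * F_X^(theta-1) * f_X,
# so the reversed hazards satisfy f_Y / F_Y = theta * (f_X / F_X).
theta = 3.0
x = np.linspace(0.1, 0.9, 9)       # interior of (0, 1)
F_X = x                            # X ~ Uniform(0, 1)
f_X = np.ones_like(x)
F_Y = F_X ** theta
f_Y = theta * F_X ** (theta - 1.0) * f_X
```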

5. Applications

We used a real database from [18], representing the Danish fire insurance losses recorded during the 1980–1990 period [44,45,46], where losses range from MDKK 1.0 to 263.250 (millions of Danish Krone). The average loss is MDKK 3.385, while 25% of losses are smaller than MDKK 1.321 and 75% of losses are smaller than MDKK 2.967.

The data from the database [18] were fitted using a Weibull distribution, and the maximum likelihood estimates of the shape parameter, ĉ=0.3192, and of the scale parameter, τ̂=0.9585, were obtained.
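Such a fit can be reproduced in spirit with standard tools. The snippet below is a hypothetical illustration on synthetic Weibull data (the Danish losses themselves are not reproduced here), using scipy's `weibull_min` with the location fixed at zero:

```python
import numpy as np
from scipy import stats

# Draw synthetic losses from a Weibull with the reported parameter values,
# then recover shape (c) and scale (tau) by maximum likelihood.
rng = np.random.default_rng(42)
losses = stats.weibull_min.rvs(c=0.3192, scale=0.9585, size=20_000, random_state=rng)
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(losses, floc=0)
```

With a sample of this size, the estimates land close to the generating values, mirroring how ĉ and τ̂ were obtained from the real data.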

The results displayed in Table 1, Table 2, Table 3, Table 4 and Table 5 can be used to compare the values of the following entropy measures:

  • The Tsallis entropy HαT(X) corresponding to the random variable X which models the loss;

  • The Tsallis entropy of the left-truncated loss and, respectively, censored loss random variable corresponding to the per-payment risk model with a deductible d, namely HαT(Xlt(d)) and, respectively, HαT(Xlc(d));

  • The Tsallis entropy of the right-truncated and, respectively, censored loss random variable corresponding to the per-payment risk model with a policy limit u, denoted by HαT(Xrt(u)) and, respectively, HαT(Xrc(u));

  • The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u, HαT(Xlr(d,u)).

In the case of the Weibull distribution with parameters λ=0.9585 and γ=0.3192, for d = 1.1, 1.2, 1.3, 1.4, 1.5, u = 10, 15, 20, 25 and values of the Tsallis parameter α in the neighborhood of 1, we draw the following conclusions. The values of the Tsallis entropy for α=1 correspond to those obtained in [18]. For values of α lower than 1, the values of the corresponding entropy measures increase, while for values of α greater than 1 they decrease, as can also be noticed in Figure 3. This behavior allows a higher degree of flexibility for modeling the loss-truncated and loss-censored random variables in actuarial models.

6. Conclusions

In this paper, an entropy-based approach for risk assessment in the framework of loss models and survival models involving truncated and censored random variables was developed.

By using the Tsallis entropy, the effect of some partial insurance schemes, such as inflation, truncation and censoring from above and truncation and censoring from below was investigated.

Analytical expressions for the per-payment and per-loss entropies of losses were derived. Moreover, closed formulas for the entropy of losses corresponding to the proportional hazard rate model and the proportional reversed hazard rate model were obtained.

The results obtained point out that the entropy depends on the deductible and the policy limit, and that inflation increases the entropy, which means that the degree of uncertainty of losses increases compared with the case without inflation. The use of entropy measures allows risk assessment for actuarial models involving truncated and censored random variables.

We used a real database representing the Danish fire insurance losses recorded between 1980 and 1990 [44,45,46], where losses range from MDKK 1.0 to 263.250 (millions of Danish Krone). The average loss is MDKK 3.385, while 25% of losses are smaller than MDKK 1.321 and 75% of losses are smaller than MDKK 2.967.

The data were fitted using the Weibull distribution, yielding the maximum likelihood estimates of the shape parameter, ĉ=0.3192, and of the scale parameter, τ̂=0.9585.

The values of the Tsallis entropies for α=1 correspond to those from [18]; for α lower than 1 the entropy values increase, while for α greater than 1 they decrease, as can also be noticed from Figure 3.

The paper extends several results obtained in this field; see, for example, Sachlas and Papaioannou [18].

The study of the results obtained reveals that for parameter values α≠1 the Tsallis entropy corresponding to the right-truncated loss random variable is increasing with respect to the value of the policy limit u. On the other hand, for α=1, the Tsallis entropy, which reduces to the Shannon entropy measure, is decreasing with respect to u. From an actuarial perspective, when the policy limit increases, the risk of the insurance company also increases, and therefore the entropy of losses should increase, too. The detected behavior proves that the Tsallis entropy approach for evaluating the risk corresponding to the right-truncated loss random variable is more realistic.

Therefore, we can conclude that the Tsallis entropy approach for actuarial models involving truncated and censored random variables provides a new and relevant perspective, since it allows a higher degree of flexibility for the assessment of risk models.

Acknowledgments

The authors would like to express their gratitude to the anonymous referees for their valuable suggestions and comments.

Author Contributions

All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This work was supported by a grant of the Romanian Ministery of Education and Research, CNCS—UEFISCDI, project number PN-III-P4-ID-PCE-2020-1112, within PNCDI III.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Koukoumis C., Karagrigoriou A. On Entropy-type Measures and Divergences with Applications in Engineering, Management and Applied Sciences. Int. J. Math. Eng. Manag. Sci. 2021;6:688–707. doi: 10.33889/IJMEMS.2021.6.3.043. [DOI] [Google Scholar]
  • 2.Iatan I., Dragan M., Preda V., Dedu S. Using Probabilistic Models for Data Compression. Mathematics. 2022;10:3847. doi: 10.3390/math10203847. [DOI] [Google Scholar]
  • 3.Li S., Zhuang Y., He J. Stock market stability: Diffusion entropy analysis. Phys. A. 2016;450:462–465. doi: 10.1016/j.physa.2016.01.037. [DOI] [Google Scholar]
  • 4.Miśkiewicz J. Improving quality of sample entropy estimation for continuous distribution probability functions. Phys. A. 2016;450:473–485. doi: 10.1016/j.physa.2015.12.106. [DOI] [Google Scholar]
  • 5.Toma A., Karagrigoriou A., Trentou P. Robust Model Selection Criteria Based on Pseudodistances. Entropy. 2020;22:304. doi: 10.3390/e22030304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Moretto E., Pasquali S., Trivellato B. Option pricing under deformed Gaussian distributions. Phys. A. 2016;446:246–263. doi: 10.1016/j.physa.2015.11.026. [DOI] [Google Scholar]
  • 7.Remuzgo L., Trueba C., Sarabia S.M. Evolution of the global inequality in greenhouse gases emissions using multidimensional generalized entropy measures. Phys. A. 2015;444:146–157. doi: 10.1016/j.physa.2015.10.017. [DOI] [Google Scholar]
  • 8.Sheraz M., Dedu S., Preda V. Volatility Dynamics of Non-Linear Volatile Time Series and Analysis of Information Flow: Evidence from Cryptocurrency Data. Entropy. 2022;24:1410. doi: 10.3390/e24101410. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Toma A., Leoni-Aubin S. Robust portfolio optimization using pseudodistances. PLoS ONE. 2015;10:e0140546. doi: 10.1371/journal.pone.0140546. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Nayak A.S., Rajagopal S.A.K., Devi A.R.U. Bipartite separability of symmetric N-qubit noisy states using conditional quantum relative Tsallis entropy. Phys. A. 2016;443:286–295. doi: 10.1016/j.physa.2015.09.086. [DOI] [Google Scholar]
  • 11.Pavlos G.P., Iliopoulos A.C., Zastenker G.N., Zelenyi L.M. Tsallis non-extensive statistics and solar wind plasma complexity. Phys. A. 2015;422:113–135. doi: 10.1016/j.physa.2014.12.007. [DOI] [Google Scholar]
  • 12.Singh V.P., Cui H. Suspended sediment concentration distribution using Tsallis entropy. Phys. A. 2014;414:31–42. doi: 10.1016/j.physa.2014.06.075. [DOI] [Google Scholar]
  • 13.Balakrishnan N., Buono F., Longobardi M. A unified formulation of entropy and its application. Phys. A. 2022;596:127214. doi: 10.1016/j.physa.2022.127214. [DOI] [Google Scholar]
  • 14.Ebrahimi N. How to measure uncertainty in the residual life distributions. Sankhya. 1996;58:48–57. [Google Scholar]
  • 15.Ebrahimi N., Pellerey F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995;32:202–211. doi: 10.2307/3214930. [DOI] [Google Scholar]
  • 16.Baxter L.A. A note on information and censored absolutely continuous random variables. Stat. Decis. 1989;7:193–197. doi: 10.1524/strm.1989.7.12.193. [DOI] [Google Scholar]
  • 17.Zografos K. On some entropy and divergence type measures of variability and dependence for mixed continuous and discrete variables. J. Stat. Plan. Inference. 2008;138:3899–3914. doi: 10.1016/j.jspi.2008.02.011. [DOI] [Google Scholar]
  • 18.Sachlas A., Papaioannou T. Residual and past entropy in actuarial science. Methodol. Comput. Appl. Probab. 2014;16:79–99. doi: 10.1007/s11009-012-9300-0. [DOI] [Google Scholar]
  • 19.Gupta R.C., Gupta R.D. Proportional reversed hazard rate model and its applications. J. Stat. Plan. Inference. 2007;137:3525–3536. doi: 10.1016/j.jspi.2007.03.029. [DOI] [Google Scholar]
  • 20.Di Crescenzo A., Longobardi M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002;39:430–440. doi: 10.1017/S002190020002266X. [DOI] [Google Scholar]
  • 21.Messelidis C., Karagrigoriou A. Contingency Table Analysis and Inference via Double Index Measures. Entropy. 2022;24:477. doi: 10.3390/e24040477. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Anastassiou G., Iatan I.F. Modern Algorithms of Simulation for Getting Some Random Numbers. J. Comput. Anal. Appl. 2013;15:1211–1222. [Google Scholar]
  • 23.Pardo L. Statistical Inference Based on Divergence Measures. Chapman & Hall/CRC; Boca Raton, FL, USA: 2006. [Google Scholar]
  • 24.Toma A. Model selection criteria using divergences. Entropy. 2014;16:2686–2698. doi: 10.3390/e16052686. [DOI] [Google Scholar]
  • 25.Belzunce F., Navarro J., Ruiz J., del Aguila Y. Some results on residual entropy function. Metrika. 2004;59:147–161. doi: 10.1007/s001840300276. [DOI] [Google Scholar]
  • 26.Vonta F., Karagrigoriou A. Generalized measures of divergence in survival analysis and reliability. J. Appl. Probab. 2010;47:216–234. doi: 10.1239/jap/1269610827. [DOI] [Google Scholar]
  • 27.Tsallis C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988;52:479–487. doi: 10.1007/BF01016429. [DOI] [Google Scholar]
  • 28.Tsallis C., Mendes R.S., Plastino A.R. The role of constraints within generalized nonextensive statistics. Phys. A. 1998;261:534–554. doi: 10.1016/S0378-4371(98)00437-3. [DOI] [Google Scholar]
  • 29.Tsallis C., Anteneodo A., Borland L., Osorio R. Nonextensive statistical mechanics and economics. Phys. A. 2003;324:89–100. doi: 10.1016/S0378-4371(03)00042-6. [DOI] [Google Scholar]
  • 30.Tsallis C. Introduction to Nonextensive Statistical Mechanics. Springer Science Business Media, LLC; Berlin/Heidelberg, Germany: 2009. [Google Scholar]
  • 31.Furuichi S. Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006;47:023302. doi: 10.1063/1.2165744. [DOI] [Google Scholar]
  • 32.Furuichi S. On uniqueness theorems for Tsallis entropy and Tsallis relative entropy. IEEE Trans. Inf. Theory. 2005;51:3638–3645. doi: 10.1109/TIT.2005.855606. [DOI] [Google Scholar]
  • 33.Trivellato B. Deformed exponentials and applications to finance. Entropy. 2013;15:3471–3489. doi: 10.3390/e15093471. [DOI] [Google Scholar]
  • 34.Trivellato B. The minimal k-entropy martingale measure. Int. J. Theor. Appl. Financ. 2012;15:1250038. doi: 10.1142/S0219024912500380. [DOI] [Google Scholar]
  • 35.Preda V., Dedu S., Sheraz M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Phys. A. 2014;407:350–359. doi: 10.1016/j.physa.2014.04.011. [DOI] [Google Scholar]
  • 36.Preda V., Dedu S., Gheorghe C. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints. Phys. A. 2015;436:925–932. doi: 10.1016/j.physa.2015.05.092. [DOI] [Google Scholar]
  • 37.Miranskyy A.V., Davison M., Reesor M., Murtaza S.S. Using entropy measures for comparison of software traces. Inform. Sci. 2012;203:59–72. doi: 10.1016/j.ins.2012.03.017. [DOI] [Google Scholar]
  • 38.Preda V., Dedu S., Sheraz M. Second order entropy approach for risk models involving truncation and censoring. Proc. Rom.-Acad. Ser. Math. Phys. Tech. Sci. Inf. Sci. 2016;17:195–202. [Google Scholar]
  • 39.Shannon C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948;27:379–423. [Google Scholar]
  • 40.Klugman S.A., Panjer H.H., Willmot G.E. Loss Models: From Data to Decisions. John Wiley and Sons; New York, NY, USA: 2004. [Google Scholar]
  • 41.Di Crescenzo A. Some results on the proportional reversed hazards model. Stat. Probab. Lett. 2000;50:313–321. doi: 10.1016/S0167-7152(00)00127-9. [DOI] [Google Scholar]
  • 42.Sankaran P.G., Gleeja C.L. Proportional reversed hazard and frailty models. Metrika. 2008;68:333–342. doi: 10.1007/s00184-007-0165-0. [DOI] [Google Scholar]
  • 43.Cox D.R. Regression models and life-tables. J. R. Stat. Soc. 1972;34:187–220. doi: 10.1111/j.2517-6161.1972.tb00899.x. [DOI] [Google Scholar]
  • 44.McNeil A.J. Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bull. 1997;27:117–137. doi: 10.2143/AST.27.1.563210. [DOI] [Google Scholar]
  • 45.Pigeon M., Denuit M. Composite Lognormal-Pareto model with random threshold. Scand. Actuar. J. 2011;3:177–192. doi: 10.1080/03461231003690754. [DOI] [Google Scholar]
  • 46.Resnick S.I. Discussion of the Danish data on large fire insurance losses. ASTIN Bull. 1997;27:139–151. doi: 10.2143/AST.27.1.563211. [DOI] [Google Scholar]
