Computational Intelligence and Neuroscience. 2018 Jun 12;2018:1404067. doi: 10.1155/2018/1404067

An Approach to Linguistic Multiple Attribute Decision-Making Based on Unbalanced Linguistic Generalized Heronian Mean Aggregation Operator

Bing Han 1, Huayou Chen 1, Jiaming Zhu 1, Jinpei Liu 2,3
PMCID: PMC6020671  PMID: 30008739

Abstract

This paper proposes an approach to linguistic multiple attribute decision-making problems with interactive unbalanced linguistic assessment information by unbalanced linguistic generalized Heronian mean aggregation operators. First, some generalized Heronian mean aggregation operators with unbalanced linguistic information are proposed, involving the unbalanced linguistic generalized arithmetic Heronian mean operator and the unbalanced linguistic generalized geometric Heronian mean operator. For the situation that the input arguments have different degrees of importance, the unbalanced linguistic generalized weighted arithmetic Heronian mean operator and the unbalanced linguistic generalized weighted geometric Heronian mean operator are developed. Then we investigate their properties and some particular cases. Finally, the effectiveness and universality of the developed approach are illustrated by a low-carbon tourist instance and comparison analysis. A sensitivity analysis is performed as well to test the robustness of proposed methods.

1. Introduction

As an important part of multicriteria decision-making, multiple attribute decision-making (MADM) [1] and multiobjective decision-making build up the multicriteria decision-making system. The MADM concentrates research on discrete finite alternatives. The essence of MADM is ranking for the given alternatives and selecting the most desirable one. In order to integrate the individual preference value into a collective one, various operators have been presented during the past few years, such as the ordered weighted average (OWA) operator [2] which pays attention to the ordered position of each input datum, the ordered weighted geometric (OWG) operator [3], the dependent uncertain OWA (DUOWA) operator [4, 5], and the generalized OWA (GOWA) operator by adding an attitude parameter [6]. Zhou and Chen [7] investigated the continuous generalized OWA operator. Merigo [8, 9] presented the induced uncertain heavy OWA operators and induced generalized OWA (IGOWA) operator by using induced variables. Liao and Xu [10, 11] investigated the hybrid aggregation operators which consider the weight of arguments and their positions simultaneously. Liu et al. [12] presented some q-Rung Orthopair Fuzzy Aggregation Operators which could describe the space of uncertain information broadly.

However, the above aggregation operators have one thing in common: all input arguments are assumed to be independent, which is not realistic. The Heronian mean (HM) operator can overcome this drawback and was developed into an aggregation operator in [13]. Subsequently, a range of extensions has been proposed, such as the generalized Heronian mean (GHM) operator [14, 15], the intuitionistic fuzzy geometric HM (IFGHM) operator [16], the uncertain linguistic Heronian mean (ULHM) aggregation operators [17, 18], the partitioned Heronian mean operator [19], and Heronian aggregation operators of intuitionistic fuzzy numbers [20]. The Heronian mean operator has some particular characteristics that the others do not have. In contrast to the Choquet integral or the power average operator, which stress subjective or objective changes of the weights, the Heronian mean focuses on the aggregated arguments themselves. For a set of criteria values e_i (i = 1, ⋯, n), the Bonferroni mean operator only considers the correlation between e_i and e_j (i ≠ j); the relationship between e_i and itself cannot be considered. The Heronian mean operator can capture the correlation of both the different criteria values e_i, e_j (i < j) and the criteria value e_i itself.

With the development of society, decision-making information is increasingly fuzzy or uncertain [21, 22]. It is more suitable and reasonable to express preferences in the form of linguistic information rather than real numbers. Fuzzy linguistic approaches were first introduced by Zadeh [23]. Later on, a series of extended linguistic term sets were developed, such as the intuitionistic linguistic term set (ILTS) [24–28], the 2-tuple linguistic term set (2TLTS) [19, 29–33], the virtual linguistic term set (VLTS) [34], the probabilistic linguistic term set (PLTS) [35], and the hesitant fuzzy linguistic term set (HFLTS) [36]. The ILTS was introduced by Wang and Li [36] and has three main parts: linguistic terms, a membership function, and a nonmembership function. Herrera and Martinez [37] presented the 2-tuple linguistic (2TL) model, which can effectively avoid information loss. To preserve all the given information, Xu [34] extended the original linguistic term set to a continuous linguistic term set and introduced the concept of the virtual linguistic term. Some researchers have reported that the computational models of the 2-tuple linguistic model and the virtual linguistic model are equivalent [38, 39]. In consideration of the possible uncertainties in linguistic expression, the probabilistic linguistic term set (PLTS) [35] was developed by adding probabilities without loss of any original linguistic variables. The HFLTS, combining the LTS and the HFS, was introduced by Rodríguez et al. [40]. It is a more reasonable information expression form, which can be used to describe people's subjective cognitions.

Obviously, the above linguistic aggregation operators are based upon symmetrically and uniformly placed linguistic term set. However, it is necessary to give evaluations by using nonsymmetrically and nonuniformly distributed linguistic terms [41] in some cases. For example, when assessing a person's ability, the linguistic term set used by experts is {extremely bad, bad, medium, almost good, good, quite good, very good, extremely good, perfect}. The number of the terms lying on the left of the central term “medium” (two) is less than that on the right one (six). To overcome the drawback, the unbalanced linguistic representation model has been presented in [42]. Subsequently, the unbalanced linguistic aggregation operators were introduced, for instance, the unbalanced linguistic OWG (ULOWG) operator [43], the unbalanced linguistic weighted OWA (ULWOWA) operator [41], and the unbalanced linguistic power average (ULPA) operator [44]. Furthermore, unbalanced linguistic aggregation operators in risk analysis were also investigated in [45, 46].

Through the above analysis, it is very important and necessary to develop the Heronian mean operator to cope with unbalanced linguistic information. Thus, the aim of this paper is to solve multiple attribute decision-making problems in which the evaluation information is correlative unbalanced linguistic information by combining the Heronian mean operator with unbalanced linguistic variables. We first introduce the unbalanced linguistic generalized arithmetic Heronian mean (ULGAHM) operator and the unbalanced linguistic generalized geometric Heronian mean (ULGGHM) operator. The most crucial advantage of these operators is that they take into account the correlation of the input variables and can deal with unbalanced linguistic information. For situations in which different attributes have different degrees of importance, the unbalanced linguistic generalized weighted arithmetic Heronian mean (ULGWAHM) operator and the unbalanced linguistic generalized weighted geometric Heronian mean (ULGWGHM) operator are presented and applied to MADM problems. The motivation of this paper rests on the following facts:

(i) The existing aggregation operators with unbalanced linguistic information mainly concentrate on the OWA and OWG operators. There has been little research on the Heronian mean operator with unbalanced linguistic information.

(ii) The generalized Heronian mean aggregation operators can reflect the relationship of both the different criteria values ei, ej(i < j) and the criteria value ei itself. In addition, they have flexible parameters p and q, and we could select the appropriate p and q to meet the different actual requirements.

(iii) Zou [43] only considered the weights of criteria in the unbalanced linguistic environment. Meng [44] considered the weights of both experts and attributes. However, both of them ignore the relationships among the input arguments. The multiple attribute decision-making methods in [43, 44] cannot deal with the situation where the assessments take the form of interrelated unbalanced linguistic information. Jiang [45] emphasized changing the weights of the aggregation operator rather than the input arguments themselves. The new Heronian mean operators with unbalanced linguistic information can be used to handle the above cases effectively.

The rest of the paper is arranged as follows: Section 2 introduces some basic concepts and notions. Some operational laws for unbalanced linguistic 2-tuples are defined in Section 3. In Section 4, some existing Heronian mean operators are reviewed, and we further develop some new unbalanced linguistic generalized Heronian mean operators and investigate their properties and particular cases. Section 5 presents the multiple attribute decision-making procedure with unbalanced linguistic information. Subsequently, an actual example is given in Section 6. Section 7 presents comparison analyses with other methods. Finally, the paper is summarized in Section 8.

2. Preliminaries

In this section, we briefly review the linguistic approach and the unbalanced linguistic representation model.

2.1. The Linguistic Approach

As an approximate technique, the linguistic approach [23] expresses qualitative information in the form of linguistic values or labels. Let S = {s_0, s_1, …, s_g} be a linguistic term set. The label s_α represents a possible value of the linguistic variable. For instance, a linguistic term set of seven labels could be given as follows:

S = {s_0 = none (N), s_1 = very bad (VB), s_2 = bad (B), s_3 = medium (M), s_4 = good (G), s_5 = very good (VG), s_6 = perfect (P)}  (1)

where the central label s3 represents the mediocre comment and the others sit on either side of the central one symmetrically and uniformly. Generally, S should meet the following features:

  1. A negation operator: Neg(s_i) = s_{g−i}.

  2. An order: s_i ≤ s_j if and only if i ≤ j.

  3. A max operator: max(s_i, s_j) = s_i if s_i ≥ s_j; a min operator: min(s_i, s_j) = s_i if s_i ≤ s_j.

In order to avoid information loss effectively, Herrera [37] introduced the 2-tuple fuzzy linguistic representation model which is composed of a linguistic label si and a real number α ∈ [−0.5,0.5) denoting the value of symbolic translation.

Definition 1 (see [37]). —

Let S = {s0, s1,…, sg} be a linguistic term set and let β ∈ [0, g] be a number value representing the aggregation result of linguistic symbolic. Then the function Δ is defined as follows:

Δ : [0, g] → S × [−0.5, 0.5)  (2)
Δ(β) = (s_i, α), with i = round(β), α = β − i, α ∈ [−0.5, 0.5)  (3)

where round (·) is the integer operator, si is the closest index label to β, and α is the value of symbolic translation.

Definition 2 (see [37]). —

Let S = {s0, s1,…, sg} be a linguistic term set and let (si, αi) be a linguistic 2-tuple. Then the equivalent numerical value β ∈ [0, g] to a 2-tuple (si, αi) can be obtained by the following function Δ−1 : S × [−0.5,0.5)→[0, g]

Δ^{-1}(s_i, α) = i + α = β  (4)

We can convert a linguistic term to a linguistic 2-tuple by adding a value 0 as symbolic translation:

Δ(s_i) = (s_i, 0).  (5)
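To make Definitions 1 and 2 concrete, the following small Python sketch implements the translation functions Δ and Δ^{-1} on label indices. The function names delta and delta_inv are illustrative and not part of the original model; a linguistic 2-tuple is represented simply as a pair (index, alpha).

    def delta(beta):
        """Map a symbolic aggregation value beta in [0, g] to a 2-tuple (i, alpha)."""
        i = round(beta)        # index of the closest label
        alpha = beta - i       # symbolic translation, alpha in [-0.5, 0.5)
        return i, alpha

    def delta_inv(i, alpha):
        """Map a 2-tuple (i, alpha) back to its numerical value beta = i + alpha."""
        return i + alpha

    print(delta(4.6))          # (5, -0.4) up to floating-point rounding
    print(delta_inv(5, -0.4))  # 4.6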

The computational model of 2-tuple linguistic information has been developed, such as 2-tuple comparison operator, 2-tuple negation operator, and a wide range of 2-tuple aggregation operators.

2.2. The Unbalanced Linguistic Representation Model

The unbalanced linguistic representation model was introduced by Herrera [42]. The advantage of this model is that it can manage the linguistic assessment variables which are nonuniformly and nonsymmetrically distributed.

Definition 3 (see [42]). —

If a linguistic term set S has a maximum linguistic term, a minimum linguistic term, and a central linguistic term and other terms are nonuniformly and nonsymmetrically distributed around the central one on both left and right lateral sets, i.e., the different discrimination levels on both sides of central linguistic term, then this type of linguistic term sets is called unbalanced linguistic term sets. An unbalanced linguistic term set S can be noted as S = SLSCSR, in which SC contains the central linguistic term merely and SL contains all left linguistic terms lower than the central one. Similarly, SR contains all right linguistic terms higher than the central one.

Example 4 . —

When experts try to evaluate the “comfort” of a car, the linguistic assessment set is S= {N (none), M (middle), H (high), VH (very high), P (perfect)}, in which SL = {N}, SC = {M}, SR = {H, VH, P}. Obviously, it has the minimum linguistic term N, the maximum linguistic term P, and the central linguistic term M, and the number of terms in the left is 1 which is lower than that in the right (3). In other words, discrimination levels on both sides of central linguistic term are different. So S is an unbalanced linguistic term set (Figure 1).

Figure 1. Unbalanced linguistic term set.

In order to transform the unbalanced linguistic terms into linguistic 2-tuple information, the concept of a linguistic hierarchy was defined as follows.

Definition 5 (see [47, 48]). —

A linguistic hierarchy is a set of levels, with each level being a linguistic term set of different granularity. It can be denoted as LH = ⋃_t l(t, n(t)), where l(t, n(t)) is a level belonging to the linguistic hierarchy, t is a number that indicates the level of the hierarchy, and n(t) is the granularity of the linguistic term set of level t. The set FP^t = {0, 1/(2(n(t) − 1)), ⋯, (2n(t) − 3)/(2(n(t) − 1)), 1} is called the set of former modal points of level t. The construction of an LH must satisfy the following linguistic hierarchy basic rules:

  •   Rule 1: to preserve all former modal points of the membership functions of each linguistic term from one level to the following one.

  •   Rule 2: to make smooth transitions between successive levels. The aim is to add a new linguistic term set in the hierarchy such that a new linguistic term will be added between each pair of terms belonging to the term set of the previous level t.

Example 6 . —

A linguistic hierarchy with three levels could be given as follows:

LH = l(1, 3) ∪ l(2, 5) ∪ l(3, 9) = {s_0^3, s_1^3, s_2^3} ∪ {s_0^5, ⋯, s_4^5} ∪ {s_0^9, ⋯, s_8^9}, where n(1) = 3, n(2) = 5, n(3) = 9; that is, the first level is a linguistic term set of granularity 3, the second level is a linguistic term set of granularity 5, and the third level is a linguistic term set of granularity 9. It is shown graphically in Figure 2, with the granularity of each linguistic term set of the LH chosen according to the rules in Table 1.

Figure 2. Linguistic hierarchies of 3, 5, and 9 labels.

Table 1. Linguistic hierarchies.

Level        t = 1      t = 2      t = 3
Granularity  n(t) = 3   n(t) = 5   n(t) = 9

Definition 7 (see [49]). —

Let s_i^{n(t)} be a linguistic term of level t; then the transformation function from level t to another level t′ is defined as follows:

TF_t^{t′}(s_i^{n(t)}, α^{n(t)}) = Δ_{t′}(Δ_t^{-1}(s_i^{n(t)}, α^{n(t)}) · (n(t′) − 1)/(n(t) − 1))  (6)

Example 8 . —

Let (s_2^5, 0.3) be a linguistic 2-tuple representation of level 2; its linguistic 2-tuple representation of level 3 is

TF_2^3(s_2^5, 0.3) = Δ_3(Δ_2^{-1}(s_2^5, 0.3) · (9 − 1)/(5 − 1)) = Δ_3(4.6) = (s_5^9, −0.4).  (7)
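The level transformation of Definition 7 can be sketched in Python in the same spirit (the name tf is illustrative; a 2-tuple is stored as (index, alpha) and a level is identified by its granularity). With Example 8 as input it returns approximately (5, −0.4), i.e., (s_5^9, −0.4).

    def tf(two_tuple, n_t, n_t_target):
        """Transform a 2-tuple from a level of granularity n_t to one of granularity n_t_target."""
        i, alpha = two_tuple
        beta = (i + alpha) * (n_t_target - 1) / (n_t - 1)   # Delta_t^{-1}(...) rescaled
        k = round(beta)
        return k, beta - k                                   # Delta_{t'}(beta)

    print(tf((2, 0.3), 5, 9))   # approximately (5, -0.4)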

For each label of unbalanced linguistic term set, the semantic representation can be obtained by using linguistic hierarchies. The transformation process is illustrated by the following example.

Example 9 . —

For an unbalanced linguistic term set S = {N, M, H, VH, P}, it can be transformed to 2-tuple representation according to the following steps.

Step 1. Since S_L = {N}, let #(S_L) denote the number of linguistic terms in S_L, so #(S_L) = 1. There exists n(1) = 3 such that (n(1) − 1)/2 = 1 = #(S_L), so S_L can be represented by a label of level 1 in LH, i.e., N = s_0^3.

Step 2. Since S_R = {H, VH, P}, let #(S_R) denote the number of linguistic labels in S_R, so #(S_R) = 3. There exist n(2) = 5 and n(3) = 9 with (n(2) − 1)/2 = 2 < #(S_R) < (n(3) − 1)/2 = 4, so the semantic representation of S_R is obtained from labels of levels 2 and 3 in LH. Let lab_i be the number of labels assigned from level i in LH; according to Proposition 3 in [42], lab_2 = (n(3) − 1)/2 − #(S_R) = 1 and lab_3 = 2; that is, S_R is represented by one label of level 2 and two labels of level 3, i.e., H = s_3^5, VH = s_7^9, P = s_8^9.

Step 3. To bridge the representation gaps defined in [42], S_C is represented by the central label of level 1 in the upside and the corresponding label of level 3 in the downside; i.e., M = s_1^3 (upside) and M = s_4^9 (downside).

Step 4. The ultimate representations are as follows:

S_L: N = s_0^3;  S_C: M = s_1^3 (upside), M = s_4^9 (downside);  S_R: H = s_3^5, VH = s_7^9, P = s_8^9.  (8)

Definition 10 (see [42]). —

The transformation function from an unbalanced linguistic 2-tuple (s_i, α_i), with s_i ∈ S, to its corresponding linguistic 2-tuple representation (s_k^{n(t)}, α_i) in LH is a mapping

  • LH : S × [−0.5, 0.5) → LH × [−0.5, 0.5), such that for every s_i ∈ S there exists s_k^{n(t)} with LH(s_i, α_i) = (s_k^{n(t)}, α_i).

Conversely, we can obtain the linguistic 2-tuple representation from the unbalanced linguistic 2-tuple:

  • LH^{-1} : LH × [−0.5, 0.5) → S × [−0.5, 0.5), with LH^{-1}(s_k^{n(t)}, α_i) = (s_i, λ) for every s_k^{n(t)} ∈ S^{n(t)}, where λ is determined by the following cases:

(1) If s_i (s_i ∈ S) is represented by only one label in LH, then LH^{-1}(s_k^{n(t)}, α_i) = (s_i, α_i), i.e., λ = α_i.

(2) If s_i (s_i ∈ S) is represented by two labels in LH, then λ = α_i or

λ = Δ_t^{-1}(s_k^{n(t)}, α_i) · (n(t + 1) − 1)/(n(t) − 1) − round(Δ_t^{-1}(s_k^{n(t)}, α_i) · (n(t + 1) − 1)/(n(t) − 1))  (9)

(3) If there exists no s_i ∈ S such that s_i = s_k^{n(t)}, we convert s_k^{n(t)} to another level t′; that is, LH^{-1}(s_k^{n(t)}, α_i) = LH^{-1}(TF_t^{t′}(s_k^{n(t)}, α_i)); then return to case (1) or (2).

Example 11 . —

Continuing Example 9, we have

LH(P, 0) = (s_{16}^{17}, 0),  LH^{-1}(s_5^9, 0.3) = (AG, 0.3),  LH^{-1}(s_4^9, 0.2) = LH^{-1}(TF_3^1(s_4^9, 0.2)) = LH^{-1}(s_1^3, 0.4) = (M, 0.4).  (10)

3. Some Operational Laws for Unbalanced Linguistic 2-Tuple

Based on 2-tuple representation model, we propose some operational laws and properties of unbalanced linguistic 2-tuple.

Definition 12 . —

Let (si, αi) and (sj, αj) be two unbalanced linguistic 2-tuples, λ > 0, then one has

  1. (s_i, α_i) ⊕ (s_j, α_j) = LH^{-1}(Δ(Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, α_i))) + Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, α_j)))));

  2. (s_i, α_i) ⊗ (s_j, α_j) = LH^{-1}(Δ(Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, α_i))) · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, α_j)))));

  3. λ · (s_i, α_i) = LH^{-1}(Δ(λ · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, α_i)))));

  4. (s_i, α_i)^λ = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, α_i))))^λ)).
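A minimal numeric sketch of these operational laws: both operands are first brought to the numerical scale of the maximum-granularity level t_H (the composition Δ^{-1} ∘ TF_{t_i}^{t_H} ∘ LH), the arithmetic is carried out there, and the result is mapped back through Δ and LH^{-1}. The helpers to_beta and from_beta below stand in for that bookkeeping and are assumptions of this illustration, not functions defined in the paper.

    # Sketch of Definition 12, assuming to_beta(s) plays the role of
    # Delta^{-1}(TF_{t_i}^{t_H}(LH(s))) and from_beta(beta) the role of LH^{-1}(Delta(beta)).

    def uls_add(s_i, s_j, to_beta, from_beta):     # (s_i, a_i) (+) (s_j, a_j)
        return from_beta(to_beta(s_i) + to_beta(s_j))

    def uls_mul(s_i, s_j, to_beta, from_beta):     # (s_i, a_i) (x) (s_j, a_j)
        return from_beta(to_beta(s_i) * to_beta(s_j))

    def uls_scale(lam, s_i, to_beta, from_beta):   # lam . (s_i, a_i), lam > 0
        return from_beta(lam * to_beta(s_i))

    def uls_power(s_i, lam, to_beta, from_beta):   # (s_i, a_i)^lam, lam > 0
        return from_beta(to_beta(s_i) ** lam)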

Theorem 13 . —

Assume that (si, αi) and (sj, αj) are two unbalanced linguistic 2-tuples, λ > 0, then

  1. (si, 0) ⊕ (sj, 0) = (sj, 0) ⊕ (si, 0);

  2. (si, 0) ⊗ (sj, 0) = (sj, 0) ⊗ (si, 0);

  3. λ · ((si, 0) ⊕ (sj, 0)) = (λ⊙(si, 0)) ⊕ (λ⊙(sj, 0));

  4. (λ1 + λ2) · (si, 0) = (λ1 · (si, 0)) ⊕ (λ2 · (si, 0));

  5. ((si, 0) ⊗ (sj, 0))λ = (si, 0)λ ⊗ (sj, 0)λ;

  6. λ1 · λ2 · (si, 0) = (λ1λ2) · (si, 0);

  7. (si, 0)λ1 ⊗ (si, 0)λ2 = (si, 0)λ1+λ2;

  8. ((si, 0)λ1)λ2 = (si, 0)λ1·λ2.

Proof —

(1)

(s_i, 0) ⊕ (s_j, 0) = LH^{-1}(Δ(Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))) = LH^{-1}(Δ(Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))) + Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = (s_j, 0) ⊕ (s_i, 0);  (11)

(2)

(s_i, 0) ⊗ (s_j, 0) = LH^{-1}(Δ(Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))) = LH^{-1}(Δ(Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))) · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = (s_j, 0) ⊗ (s_i, 0);  (12)

(3)

λ · ((s_i, 0) ⊕ (s_j, 0)) = LH^{-1}(Δ(λ · (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0)))))) = LH^{-1}(Δ(λ · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + λ · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))) = (λ · (s_i, 0)) ⊕ (λ · (s_j, 0));  (13)

(4)

(λ_1 + λ_2) · (s_i, 0) = LH^{-1}(Δ((λ_1 + λ_2) · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = LH^{-1}(Δ(λ_1 · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + λ_2 · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = (λ_1 · (s_i, 0)) ⊕ (λ_2 · (s_i, 0));  (14)

(5)

((s_i, 0) ⊗ (s_j, 0))^λ = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^λ)) = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^λ · (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^λ)) = (s_i, 0)^λ ⊗ (s_j, 0)^λ;  (15)

(6)

λ_1 · λ_2 · (s_i, 0) = LH^{-1}(Δ(λ_1 · λ_2 · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = (λ_1 λ_2) · (s_i, 0);  (16)

(7)

(s_i, 0)^{λ_1} ⊗ (s_i, 0)^{λ_2} = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{λ_1} · (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{λ_2})) = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{λ_1 + λ_2})) = (s_i, 0)^{λ_1 + λ_2};  (17)

(8)

((s_i, 0)^{λ_1})^{λ_2} = LH^{-1}(Δ(((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{λ_1})^{λ_2})) = LH^{-1}(Δ((Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{λ_1 · λ_2})) = (s_i, 0)^{λ_1 · λ_2}.  (18)

4. Some Heronian Mean Operators

4.1. The Existing Heronian Mean Operators

The Heronian mean operator has the capacity of capturing the interaction between the input arguments.

Definition 14 (see [15]). —

Let (x_1, ⋯, x_n) be a collection of nonnegative numbers; the Heronian mean operator is a mapping HM: (0, +∞)^n → (0, +∞) which satisfies

HM(x_1, x_2, ⋯, x_n) = (2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n √(x_i x_j)  (19)

A series of HM operators are provided, such as the generalized HM (GHM) operator and the generalized geometric HM (GGHM) operator.

Definition 15 (see [15]). —

Let (x_1, ⋯, x_n) be a collection of nonnegative numbers and p ≥ 0, q ≥ 0, p + q > 0; the generalized Heronian mean operator is a mapping GHM: (0, +∞)^n → (0, +∞) which satisfies

GHM^{p,q}(x_1, x_2, ⋯, x_n) = ((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n x_i^p x_j^q)^{1/(p+q)}  (20)

Definition 16 (see [17]). —

Let (x_1, ⋯, x_n) be a collection of nonnegative numbers and p ≥ 0, q ≥ 0, p + q > 0; the generalized geometric Heronian mean operator is a mapping GGHM: (0, +∞)^n → (0, +∞) which satisfies

GGHM^{p,q}(x_1, x_2, ⋯, x_n) = (1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p x_i + q x_j)^{2/(n(n + 1))}  (21)

4.2. The Proposed Heronian Mean Operators

The Heronian mean operator can capture the relevance of the individual arguments. However, it has rarely been applied to unbalanced linguistic information. In this section, we extend the Heronian mean operator to the situation in which the input arguments are unbalanced linguistic information and develop some unbalanced linguistic Heronian mean operators, namely the unbalanced linguistic generalized arithmetic Heronian mean (ULGAHM) operator, the unbalanced linguistic generalized geometric Heronian mean (ULGGHM) operator, the unbalanced linguistic generalized weighted arithmetic Heronian mean (ULGWAHM) operator, and the unbalanced linguistic generalized weighted geometric Heronian mean (ULGWGHM) operator. Moreover, some properties of these operators are investigated, and some special cases with respect to the parameter values are discussed.

Definition 17 . —

Let p, q ≥ 0, p + q > 0 and (si, 0)(i = 1, ⋯, n) be a collection of unbalanced linguistic 2-tuple variables, then the unbalanced linguistic generalized arithmetic Heronian mean operator of dimension n is a mapping ULGAHM: ΩnΩ, which satisfies

ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^p (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^q)^{1/(p+q)}))  (22)

where Ω is the set of all unbalanced linguistic 2-tuple variables and tH is the level of the maximum granularity in LH.
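Once every argument has been mapped to its value β_i = Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) on the maximum-granularity level, (22) is a plain double sum. The Python sketch below works on those β values only; mapping the result back through Δ and LH^{-1} is omitted, and the function name is illustrative.

    def ulgahm(betas, p, q):
        """Generalized arithmetic Heronian aggregation of beta values as in (22)."""
        n = len(betas)
        total = sum(betas[i] ** p * betas[j] ** q
                    for i in range(n) for j in range(i, n))
        return (2.0 / (n * (n + 1)) * total) ** (1.0 / (p + q))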

Now, we explore some properties of the ULGAHM operator.

Theorem 18 . —

Let ((s1, 0), ⋯, (sn, 0)) be a collection of unbalanced linguistic 2-tuples and p, q ≥ 0, then the properties of the ULGAHM operator are given as follows:

  • (1) Monotonicity: let ((s_1, 0), ⋯, (s_n, 0)) and ((s_1′, 0), ⋯, (s_n′, 0)) be two collections of unbalanced linguistic 2-tuples with (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n; then
    ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≥ ULGAHM^{p,q}((s_1′, 0), ⋯, (s_n′, 0)).  (23)
  • (2) Idempotency: if (s_i, 0) = (s, 0) for all i = 1, ⋯, n, then
    ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGAHM^{p,q}((s, 0), ⋯, (s, 0)) = (s, 0).  (24)
  • (3) Boundedness: the ULGAHM operator lies between the minimum and maximum operators; i.e.,
    min((s_1, 0), ⋯, (s_n, 0)) ≤ ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≤ max((s_1, 0), ⋯, (s_n, 0)).  (25)

Proof —

(1) Since (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n, according to the definitions of LH and Δ,

we have TF_{t_i}^{t_H}(LH(s_i, 0)) ≥ TF_{t_i′}^{t_H}(LH(s_i′, 0)) for all i = 1, ⋯, n; based on Definition 12,

((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^p · (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^q)^{1/(p+q)} ≥ ((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_{t_i′}^{t_H}(LH(s_i′, 0))))^p · (Δ^{-1}(TF_{t_j′}^{t_H}(LH(s_j′, 0))))^q)^{1/(p+q)}  (26)

Thus, ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≥ ULGAHM^{p,q}((s_1′, 0), ⋯, (s_n′, 0)).

(2) Since (s_i, 0) = (s, 0) for all i = 1, ⋯, n, we have

ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_t^{t_H}(LH(s, 0))))^p (Δ^{-1}(TF_t^{t_H}(LH(s, 0))))^q)^{1/(p+q)})) = LH^{-1}(Δ(Δ^{-1}(TF_t^{t_H}(LH(s, 0))))) = (s, 0).  (27)

(3) Let (s^−, 0) = min((s_1, 0), ⋯, (s_n, 0)) and (s^+, 0) = max((s_1, 0), ⋯, (s_n, 0)); according to the property of idempotency, we have ULGAHM^{p,q}((s^−, 0), ⋯, (s^−, 0)) = (s^−, 0) and ULGAHM^{p,q}((s^+, 0), ⋯, (s^+, 0)) = (s^+, 0), and (s^−, 0) ≤ (s_i, 0) ≤ (s^+, 0) for all i = 1, ⋯, n.

Thus, (s^−, 0) = ULGAHM^{p,q}((s^−, 0), ⋯, (s^−, 0)) ≤ ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≤ ULGAHM^{p,q}((s^+, 0), ⋯, (s^+, 0)) = (s^+, 0).

It is easy to see that the unbalanced linguistic generalized arithmetic Heronian mean operator does not satisfy the property of commutativity.

We can get a series of special cases by assigning different values to the parameters p and q of the ULGAHM operator.

  • (1) If q → 0, we get
    ULGAHM^{p,0}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(lim_{q→0} ((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^p (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^q)^{1/(p+q)})) = LH^{-1}(Δ((Σ_{i=1}^n (2(n − i + 1)/(n(n + 1))) (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^p)^{1/p})) = (⊕_{i=1}^n (2(n − i + 1)/(n(n + 1))) (s_i, 0)^p)^{1/p},  (28)
    which is called the unbalanced linguistic generalized weighted mean (ULGWM) operator with the descending weight vector (2/(n + 1), ⋯, 2/(n(n + 1)))^T.

  • (2) If p = 1, q → 0, we have
    ULGAHM^{1,0}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(Σ_{i=1}^n (2(n − i + 1)/(n(n + 1))) Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))) = ⊕_{i=1}^n (2(n − i + 1)/(n(n + 1))) (s_i, 0).  (29)
    The ULGAHM operator reduces to the unbalanced linguistic weighted mean (ULWM) operator.

  • (3) If p = 2, q → 0, we obtain
    ULGAHM^{2,0}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((Σ_{i=1}^n (2(n − i + 1)/(n(n + 1))) (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^2)^{1/2})) = (⊕_{i=1}^n (2(n − i + 1)/(n(n + 1))) (s_i, 0)^2)^{1/2},  (30)
    which is called the unbalanced linguistic weighted square mean (ULWSM) operator.

  • (4) If p → 0, we have
    ULGAHM^{0,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((Σ_{j=1}^n (2j/(n(n + 1))) (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^q)^{1/q})) = (⊕_{j=1}^n (2j/(n(n + 1))) (s_j, 0)^q)^{1/q}.  (31)
    Obviously, the ULGAHM operator reduces to the unbalanced linguistic generalized weighted mean (ULGWM) operator with the ascending weight vector (2/(n(n + 1)), ⋯, 2/(n + 1))^T.

  • (5) If p = q = 1, we obtain
    ULGAHM^{1,1}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{1/2})) = (⊕_{i=1}^n ⊕_{j=i}^n (2/(n(n + 1))) ((s_i, 0) ⊗ (s_j, 0)))^{1/2}.  (32)

  • (6) If p = q = 1/2, we get
    ULGAHM^{1/2,1/2}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((2/(n(n + 1))) Σ_{i=1}^n Σ_{j=i}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{1/2} (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{1/2})) = ⊕_{i=1}^n ⊕_{j=i}^n (2/(n(n + 1))) ((s_i, 0)^{1/2} ⊗ (s_j, 0)^{1/2}),  (33)
    which we call the general unbalanced linguistic Heronian mean (ULHM) operator in this case.

We introduce the concept of the unbalanced linguistic generalized geometric Heronian mean operator as follows.

Definition 19 . —

Let p, q ≥ 0,  p + q > 0, and (si, 0)(i = 1, ⋯, n) be a collection of unbalanced linguistic 2-tuples, then the unbalanced linguistic generalized geometric Heronian mean operator of dimension n is a mapping ULGGHM:  ΩnΩ, which satisfies

ULGGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + q · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2/(n(n + 1))}))  (34)

where Ω is the set of all the unbalanced linguistic 2-tuples and tH is a level of the maximum granularity in LH.
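Analogously, a minimal numeric sketch of (34) on the β values (the LH/Δ bookkeeping is again omitted and the name is illustrative):

    def ulgghm(betas, p, q):
        """Generalized geometric Heronian aggregation of beta values as in (34)."""
        n = len(betas)
        prod = 1.0
        for i in range(n):
            for j in range(i, n):
                prod *= (p * betas[i] + q * betas[j]) ** (2.0 / (n * (n + 1)))
        return prod / (p + q)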

Some properties of the unbalanced linguistic generalized geometric Heronian mean operator are investigated as follows.

Theorem 20 . —

Let ((s1, 0), ⋯, (sn, 0)) be a collection of unbalanced linguistic 2-tuples and p, q ≥ 0, then the properties of the ULGGHM operator are given as follows:

  • (1) Monotonicity: let ((s_1, 0), ⋯, (s_n, 0)) and ((s_1′, 0), ⋯, (s_n′, 0)) be two collections of unbalanced linguistic 2-tuples with (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n; then
    ULGGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≥ ULGGHM^{p,q}((s_1′, 0), ⋯, (s_n′, 0)).  (35)
  • (2) Idempotency: if (s_i, 0) = (s, 0) for all i = 1, ⋯, n, then
    ULGGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGGHM^{p,q}((s, 0), ⋯, (s, 0)) = (s, 0).  (36)
  • (3) Boundedness: let (s^−, 0) = min((s_1, 0), ⋯, (s_n, 0)) and (s^+, 0) = max((s_1, 0), ⋯, (s_n, 0)); then (s^−, 0) ≤ ULGGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≤ (s^+, 0).

Proof —

The proof of Theorem 20 can be seen in the Appendix.

Similarly, the unbalanced linguistic generalized geometric Heronian mean operator does not satisfy the property of commutativity.

Next, we analyze some particular cases in regard to parameters p and q.

  • (1) If q → 0, then
    ULGGHM^{p,0}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(lim_{q→0} (1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + q Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2/(n(n + 1))})) = LH^{-1}(Δ(Π_{i=1}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^{2(n − i + 1)/(n(n + 1))})) = ⊗_{i=1}^n (s_i, 0)^{2(n − i + 1)/(n(n + 1))},  (37)
    which we call the unbalanced linguistic geometric mean (ULGM) operator with the descending weight vector. It does not depend on p when q → 0.

  • (2) If p → 0, then
    ULGGHM^{0,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(Π_{j=1}^n (Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2j/(n(n + 1))})) = ⊗_{j=1}^n (s_j, 0)^{2j/(n(n + 1))}.  (38)
    The ULGGHM operator reduces to the unbalanced linguistic geometric mean (ULGM) operator with the ascending weight vector. It does not depend on q when p → 0.

  • (3) If p = q = 1, then
    ULGGHM^{1,1}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((1/2) Π_{i=1}^n Π_{j=i}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2/(n(n + 1))})) = ⊗_{i=1}^n ⊗_{j=i}^n ((1/2)((s_i, 0) ⊕ (s_j, 0)))^{2/(n(n + 1))}.  (39)

  • (4) If p = q = 1/2, then
    ULGGHM^{1/2,1/2}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ(Π_{i=1}^n Π_{j=i}^n ((1/2) Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + (1/2) Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2/(n(n + 1))})) = ⊗_{i=1}^n ⊗_{j=i}^n ((1/2)(s_i, 0) ⊕ (1/2)(s_j, 0))^{2/(n(n + 1))},  (40)
    which we call the general unbalanced linguistic geometric Heronian mean (ULHM) operator in this case.

In (22) and (34), all aggregated arguments have the same importance. However, different arguments may have different importance because of the different attitudes of the decision-makers. Considering the importance of each argument, we introduce the unbalanced linguistic generalized weighted arithmetic Heronian mean (ULGWAHM) operator and the unbalanced linguistic generalized weighted geometric Heronian mean (ULGWGHM) operator as follows.

Definition 21 . —

Let p, q ≥ 0, p + q > 0, and (si, 0)(i = 1, ⋯, n) be a collection of unbalanced linguistic 2-tuples, then the unbalanced linguistic generalized weighted arithmetic Heronian mean operator of dimension n is a mapping ULGWAHM: ΩnΩ, which satisfies

ULGWAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((Σ_{i=1}^n Σ_{j=i}^n (w_i Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))))^p (w_j Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^q)^{1/(p+q)} / (Σ_{i=1}^n Σ_{j=i}^n w_i^p w_j^q)^{1/(p+q)}))  (41)

where Ω is the set of all the unbalanced linguistic 2-tuples, t_H is the level of the maximum granularity in LH, and W = (w_1, ⋯, w_n)^T is the weight vector of (s_i, 0) (i = 1, ⋯, n), satisfying w_i ≥ 0, Σ_{i=1}^n w_i = 1.
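Formula (41) is a ratio of two double sums over the weighted β values; a minimal sketch under the same assumptions as the earlier ones (illustrative name, mapping back to an unbalanced term omitted):

    def ulgwahm(betas, weights, p, q):
        """Weighted arithmetic Heronian aggregation of beta values as in (41)."""
        n = len(betas)
        num = sum((weights[i] * betas[i]) ** p * (weights[j] * betas[j]) ** q
                  for i in range(n) for j in range(i, n))
        den = sum(weights[i] ** p * weights[j] ** q
                  for i in range(n) for j in range(i, n))
        return (num / den) ** (1.0 / (p + q))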

Now, we discuss some properties of the unbalanced linguistic generalized weighted arithmetic Heronian mean operator.

Theorem 22 . —

Assume that ((s1, 0), ⋯, (sn, 0)) is a collection of unbalanced linguistic 2-tuples and p, q ≥ 0, then the properties of the unbalanced linguistic generalized weighted arithmetic Heronian mean operator are given as follows:

  • (1) Reducibility: let w_1 = ⋯ = w_n = 1/n; then
    ULGWAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)).  (42)
  • (2) Monotonicity: let ((s_1, 0), ⋯, (s_n, 0)) and ((s_1′, 0), ⋯, (s_n′, 0)) be two collections of unbalanced linguistic 2-tuples with (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n; then
    ULGWAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≥ ULGWAHM^{p,q}((s_1′, 0), ⋯, (s_n′, 0)).  (43)
  • (3) Idempotency: if (s_i, 0) = (s, 0) for all i = 1, ⋯, n, then
    ULGWAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGWAHM^{p,q}((s, 0), ⋯, (s, 0)) = (s, 0).  (44)
  • (4) Boundedness: let (s^−, 0) = min((s_1, 0), ⋯, (s_n, 0)) and (s^+, 0) = max((s_1, 0), ⋯, (s_n, 0)); then
    (s^−, 0) ≤ ULGWAHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≤ (s^+, 0).  (45)

It is easy to see that the unbalanced linguistic generalized weighted arithmetic Heronian mean operator does not satisfy the property of commutativity.

Proof —

The proof of Theorem 22 and some special cases of the unbalanced linguistic generalized weighted arithmetic Heronian mean operator in regard to parameters p and q can be seen in the Appendix.

Considering the importance of input arguments and unbalanced linguistic generalized geometric Heronian mean operator, we further introduce the unbalanced linguistic generalized weighted geometric Heronian mean (ULGWGHM) operator.

Definition 23 . —

Let p, q ≥ 0,  p + q > 0, and (si, 0)(i = 1, ⋯, n) be a collection of unbalanced linguistic 2-tuples, then the unbalanced linguistic generalized weighted geometric Heronian mean operator of dimension n is a mapping ULGWGHM: ΩnΩ, which satisfies

ULGWGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = LH^{-1}(Δ((1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p · Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + q · Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{(2(n − i + 1)/(n(n + 1))) · (w_j / Σ_{k=i}^n w_k)}))  (46)

where Ω is the set of all the unbalanced linguistic 2-tuples, t_H is the level of LH which has the maximum granularity, and W = (w_1, ⋯, w_n)^T is the weight vector of (s_i, 0) (i = 1, ⋯, n), satisfying w_i ≥ 0, Σ_{i=1}^n w_i = 1.
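The only delicate part of (46) is the exponent, which couples the position i of the first argument with the normalized weight of the second. A sketch on the β values, with Python's 0-based index i standing for the paper's i + 1 (names illustrative, mapping back omitted):

    def ulgwghm(betas, weights, p, q):
        """Weighted geometric Heronian aggregation of beta values as in (46)."""
        n = len(betas)
        prod = 1.0
        for i in range(n):
            tail = sum(weights[i:])          # corresponds to sum_{k=i}^{n} w_k
            for j in range(i, n):
                expo = (2.0 * (n - i) / (n * (n + 1))) * (weights[j] / tail)
                prod *= (p * betas[i] + q * betas[j]) ** expo
        return prod / (p + q)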

Some properties of the unbalanced linguistic generalized weighted geometric Heronian mean operator are investigated as follows.

Theorem 24 . —

Suppose that ((s1, 0), ⋯, (sn, 0)) is a collection of unbalanced linguistic 2-tuples and p, q ≥ 0, then the properties of the ULGWGHM operator are given as follows:

  • (1) Reducibility: let w_1 = ⋯ = w_n = 1/n; then
    ULGWGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)).  (47)
  • (2) Monotonicity: let ((s_1, 0), ⋯, (s_n, 0)) and ((s_1′, 0), ⋯, (s_n′, 0)) be two collections of unbalanced linguistic 2-tuples with (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n; then
    ULGWGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≥ ULGWGHM^{p,q}((s_1′, 0), ⋯, (s_n′, 0)).  (48)
  • (3) Idempotency: if (s_i, 0) = (s, 0) for all i = 1, ⋯, n, then
    ULGWGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) = ULGWGHM^{p,q}((s, 0), ⋯, (s, 0)) = (s, 0).  (49)
  • (4) Boundedness: let (s^−, 0) = min((s_1, 0), ⋯, (s_n, 0)) and (s^+, 0) = max((s_1, 0), ⋯, (s_n, 0)); then
    (s^−, 0) ≤ ULGWGHM^{p,q}((s_1, 0), ⋯, (s_n, 0)) ≤ (s^+, 0).  (50)

Proof —

The proof of Theorem 24 is similar to that of Theorem 22, so it is omitted.

Similarly, the unbalanced linguistic generalized weighted geometric Heronian mean operator does not satisfy the property of commutativity. Some special cases of the unbalanced linguistic generalized weighted geometric Heronian mean operator in regard to parameters p and q can be seen in the Appendix.

5. MAGDM Method Based on Unbalanced Linguistic Information

In this section, we develop a procedure for solving MADM problems by using the newly proposed unbalanced linguistic Heronian mean operators together with an entropy measure method, where the weights of the attributes are unknown and the assessment values are unbalanced linguistic terms.

5.1. Problem Description

A MADM problem can be described as a quadruple 〈X, E, D, S〉, where

X = {x1, ⋯, xm} is a set of all possible alternatives for decision-makers and m ≥ 2;

E = {e1, ⋯, en} is a set of attributes for each alternative, and the attributes are supposed to be correlative in this paper;

D = {d1, ⋯, dt} is a set of decision-makers;

S^(k) = (s_ij^(k))_{m×n} is the decision matrix provided by the kth decision-maker d_k, and s_ij^(k) represents the preference value of x_i with respect to attribute e_j, taking the form of an unbalanced linguistic term.

5.2. Entropy Method to Determine the Attribute Weights

An important step of the MADM problem is the determination of the attribute weights. We first introduce the concept of entropy for unbalanced linguistic term sets, then an optimization model is constructed to determine the weights of attributes.

Definition 25 . —

Let S = {s_1, ⋯, s_g} be an unbalanced linguistic term set, let X = {x_1, ⋯, x_n} be a finite set, and let LU : X → S be a mapping; then the pair (X, LU) is called an unbalanced linguistic fuzzy set, and the value LU(x) is said to be the grade of unbalanced linguistic membership of x in (X, LU).

In particular, if every LU(x_i) is the maximum label or the minimum label, i.e., LC = {LU(x_i) = s_1 or s_g, x_i ∈ X, i = 1, ⋯, n}, then the unbalanced linguistic fuzzy set reduces to a crisp set.

If all unbalanced linguistic fuzzy sets are denoted as LUFSS(X), then the concept of entropy for unbalanced linguistic information can be developed as follows.

Definition 26 . —

Let X = {x_1, ⋯, x_n} be a finite set, let LU_1 = {LU(x_i) = s_{α_i}, x_i ∈ X} and LU_2 = {LU(x_i) = s_{β_i}, x_i ∈ X} be two unbalanced linguistic fuzzy sets defined on X, and let Neg(LU(x)) denote the negation of LU(x). E(LU(x)) is said to be an entropy measure for unbalanced linguistic term sets if the following properties hold:

  1. 0 ≤ E(LU(x)) ≤ 1;

  2. E(LU(x)) = 0 if and only if LU(x) is a crisp set in X, i.e., LU(x) = {LU(x_i) = s_1 or s_g, x_i ∈ X};

  3. E(LU(x)) attains its unique maximum if every LU(x_i) is the central label s_c;

  4. E(LU_1) ≤ E(LU_2) if s_{α_i} ≤ s_{β_i} for s_{β_i} ≤ s_c, or s_{α_i} ≥ s_{β_i} for s_{β_i} ≥ s_c;

  5. E(Neg(LU(x))) = E(LU(x)).

According to Definition 26, we can construct the following formula as the entropy measure of the unbalanced linguistic information:

E(LU(x)) = (4/n) Σ_{i=1}^n (Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) / g_{t_H}) · (1 − Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) / g_{t_H})  (51)

where g_{t_H} is the maximum label index of level t_H. The entropy of the unbalanced linguistic information under each attribute can be calculated according to (51) and is denoted as E(LU(x_j)), j = 1, ⋯, n.

The idea of the entropy method is that if the entropy of an attribute over all alternatives is larger, then the attribute carries less effective information and should be assigned a smaller weight; otherwise, it should be assigned a greater weight.

If the information about weight wj of the attribute ej(j = 1, ⋯, n) is completely unknown, we can build the following equations to determine the attribute weight:

w_j = (1 − E(LU(x_j))) / Σ_{j=1}^n (1 − E(LU(x_j))),  j = 1, ⋯, n  (52)
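Under this reading of (51) and (52), the entropy of one attribute and the resulting weights can be sketched in a few lines of Python; g_max stands for the maximum label index g_{t_H}, and the names are illustrative. With the β values of attribute e_1 from the example in Section 6 (L, AH, AH, M on the granularity-17 level) the entropy comes out at about 0.906.

    def unbalanced_entropy(betas, g_max):
        """Entropy (51) of one attribute's column of beta values; g_max = g_{t_H}."""
        n = len(betas)
        return 4.0 / n * sum((b / g_max) * (1.0 - b / g_max) for b in betas)

    def entropy_weights(entropies):
        """Attribute weights from (52) when no further constraints are imposed."""
        comp = [1.0 - e for e in entropies]
        return [c / sum(comp) for c in comp]

    print(round(unbalanced_entropy([4, 10, 10, 8], 16), 3))   # about 0.906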

Sometimes, the decision-makers only offer partly known information about attribute weights; let Φ be the set of the partly known information about attribute weights. In order to get an objective attribute weight vector, the optimal model based on maximum entropy principle could be constructed:

max E(w) = λ Σ_{j=1}^n w_j (1 − E(LU(x_j))) − (1 − λ) Σ_{j=1}^n w_j ln w_j
s.t. (w_1, ⋯, w_n) ∈ Φ, 0 ≤ w_j ≤ 1,  (53)

where λ ∈ [0, 1] is a trade-off parameter representing the attitude of the decision-makers toward the two objectives.

5.3. The Decision-Making Procedure

To obtain the best option, a new approach based on the proposed unbalanced linguistic Heronian mean operators is presented to solve the MADM problem. The new approach involves the following steps.

Step 1 . —

Calculate the semantic representation about the unbalanced linguistic variables.

The semantic representation can be calculated by

LH : S × [−0.5, 0.5) → LH × [−0.5, 0.5).  (54)

Step 2 . —

Translate the unbalanced linguistic variable sij to a linguistic 2-tuple variable.

A linguistic 2-tuple variable can be obtained as (r_ij, α_ij), where r_ij is the label with index round(Δ^{-1}(LH(s_ij))) and α_ij = Δ^{-1}(LH(s_ij)) − round(Δ^{-1}(LH(s_ij))).

Step 3 . —

Calculate the entropy of unbalanced linguistic values under all attributes.

The entropy of the unbalanced linguistic values under the jth attribute can be calculated by

E(LU(x_j)) = (4/m) Σ_{i=1}^m (Δ^{-1}(TF_{t_{ij}}^{t_H}(LH(s_ij, 0))) / g_{t_H}) · (1 − Δ^{-1}(TF_{t_{ij}}^{t_H}(LH(s_ij, 0))) / g_{t_H}).  (55)

Step 4 . —

Generate the attribute weight vector W = {w1, ⋯, wn}.

Once the linguistic 2-tuple variables and entropy are obtained, the attribute weights can be generated by the optimal model:

max E(w) = λ Σ_{j=1}^n w_j (1 − E(LU(x_j))) − (1 − λ) Σ_{j=1}^n w_j ln w_j
s.t. (w_1, ⋯, w_n) ∈ Φ, 0 ≤ w_j ≤ 1.  (56)

Step 5 . —

Output the comprehensive assessment values for each alternative.

Based on the weights obtained in Step 4, utilize the ULGWAHM operator or ULGWGHM operator to obtain the comprehensive evaluation values in terms of unbalanced linguistic term for each alternative

(γ_i, α_i) = ULGWAHM((γ_i1, α_i1), ⋯, (γ_in, α_in));  (57)

or

(γ_i, α_i) = ULGWGHM((γ_i1, α_i1), ⋯, (γ_in, α_in)).  (58)

Step 6 . —

Rank all alternatives.

Compare the overall values of all alternatives according to unbalanced linguistic comparison laws and select the best one.

Step 7 . —

End.

6. Illustrative Example

In this section, we employ an unbalanced linguistic selection model for low-carbon tourism destinations (LCTDs) (adapted from [50]), apply the proposed MADM method, and give an example to demonstrate its effectiveness and validity.

6.1. Background

Over the past several decades, economic development has been recognized as the only approach to improve quality of life and social status in communities and cities of different areas, especially in developing countries. Along with rapid social and economic development, problems of carbon emissions are getting serious and have been of critical concern to both national and local governments worldwide for many decades. Increasing numbers of carbon emissions issues (such as global warming, air pollution, sea levels rising, glaciers melting, and Nino phenomenon) can lead to a variety of impacts on and liabilities in public health and sustainable regional development. As a significant part of economic development, the tourism industry is encouraging low-carbon tourism and developing low-carbon tourism destinations (LCTDs). Low-carbon tourism is a “green” form of tourism that is based on the goals of low-energy consumption, low pollution, and low emissions. Therefore, it is important for tourists to select the best option(s) from multiple low-carbon tourism destinations based on multiple attributes while considering carbon reduction, lower energy consumption, and environmental protection because of their ability to protect environment and public health.

Next, we would like to employ an illustrative example to provide certain reference attributes for unbalanced linguistic selection of LCTDs by applying the proposed MAGDM approach.

6.2. Case Study

6.2.1. The Establishment of Assessment Systems for LCTDs

In order to mitigate the damage of carbon emissions and save energy, many low-carbon tourism destinations have been developed. Moreover, many tourists have recognized the importance of low-carbon tourism for environmental protection. In order to find a good balance between the enjoyment of a trip and carbon emission reduction, it is crucial for tourists to compare and evaluate some known low-carbon tourism destinations and then choose the best one(s) from these options. Generally speaking, this evaluation and selection process is based on several criteria or attributes. In this case study, the attributes consist of the following four aspects:

  • e1: low-carbon transportation, low-energy consumption vehicles, and pick-up and drop-off services as reflected in connecting different scenic sites and reaching the destination.

  • e2: hotels and accommodation, as reflected in green-material labels, low-carbon facilities, and a low-carbon environment and education management. Food service including green food, a low-carbon environment, and low-energy waste handling mechanisms.

  • e3: consumption satisfaction, as reflected in the service cost of travel agencies, ticket prices for scenic sites, and the cost of accommodation.

  • e4: attraction and impact of scenic sites, including low-carbon customer service and low-carbon management and control.

In order to select the best LCTD, a tourist (i.e., the decision-maker) wants to go on a low-carbon trip. After preliminary screening, there are four low-carbon tourism destinations as the set of alternatives. Therefore, in this case study, the tourist is empowered to provide preferences in terms of several unbalanced linguistic terms on the response alternatives x_i (i = 1, ⋯, 4) under the four attributes e_j (j = 1, ⋯, 4). Assume that the four alternatives are to be evaluated using the following unbalanced linguistic term set S = {N (none), L (low), M (medium), AH (almost high), H (high), QH (quite high), VH (very high), AT (almost total), T (total)}, the density is extreme, t_H = 4, and the linguistic hierarchy is LH = l(1, 3) ∪ l(2, 5) ∪ l(3, 9) ∪ l(4, 17) = {s_0^3, s_1^3, s_2^3} ∪ {s_0^5, ⋯, s_4^5} ∪ {s_0^9, ⋯, s_8^9} ∪ {s_0^{17}, ⋯, s_{16}^{17}}.

To obtain the most preferred LCTD, we present the effective MADM approach for the problem, where attribute weights are partly unknown due to the problem complexity. Then the performance unbalanced linguistic assessments for each alternative xi(i = 1, ⋯, 4) are listed in Table 2. According to the approach developed in Section 5 and the given parameters, we can rank the order of alternatives by applying the MATLAB software package. The concrete steps are shown as follows.

Table 2. Decision matrix with unbalanced linguistic information.

     e1   e2   e3   e4
X1   L    AH   H    H
X2   AH   H    QH   L
X3   AH   L    AH   M
X4   M    H    AH   M

6.2.2. Procedure of MAGDM Problem Based on Unbalanced Linguistic Heronian Mean Operators

We adopt the proposed method to rank the alternatives in the example and select the best one. The decision steps are as follows.

Step 1 . —

Calculate the semantic representations of the unbalanced linguistic terms, which are given as

N = s_0^5, L = s_1^5, M = s_2^5, AH = s_5^9, H = s_6^9, QH = s_{13}^{17}, VH = s_{14}^{17}, AT = s_{15}^{17}, T = s_{16}^{17}.  (59)
Step 2 . —

Translate the unbalanced linguistic variable sij(k) to the linguistic 2-tuple variables which are expressed as

(N, 0) = (s_0^5, 0), (L, 0) = (s_1^5, 0), (M, 0) = (s_2^5, 0), (AH, 0) = (s_5^9, 0), (H, 0) = (s_6^9, 0), (QH, 0) = (s_{13}^{17}, 0), (VH, 0) = (s_{14}^{17}, 0), (AT, 0) = (s_{15}^{17}, 0), (T, 0) = (s_{16}^{17}, 0).  (60)
Step 3 . —

Calculate the entropy of unbalanced linguistic values under each attribute.

Utilizing (55), the entropy of each attribute can be derived as follows:

E(LU(x_1)) = 0.906,  E(LU(x_2)) = 0.797,  E(LU(x_3)) = 0.809,  E(LU(x_4)) = 0.914.  (61)
Step 4 . —

Generate the attribute weight vector.

Utilizing the optimal model (53) and the Lingo 11.0 software package, the attribute weights can be derived with λ = 1/2 as follows:

w_1 = 0.2648,  w_2 = 0.2793,  w_3 = 0.1943,  w_4 = 0.2615.  (62)
Step 5 . —

Output the comprehensive assessment values for each alternative.

Utilizing the ULGWAHM operator with the parameters p = q = 1, we can get the overall collective preference value (γi, αi) of the alternative xi

(γ_1, α_1) = (AH, −0.2941),  (γ_2, α_2) = (AH, −0.16355),  (γ_3, α_3) = (M, −0.037775),  (γ_4, α_4) = (AH, −0.2283).  (63)
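As a rough numeric check (an illustration, not part of the original computation), the value behind (γ_1, α_1) can be reproduced with the ulgwahm sketch given after Definition 21, using the granularity-17 indices L = 4, AH = 10, H = 12 and the weights from Step 4:

    w = [0.2648, 0.2793, 0.1943, 0.2615]
    beta_x1 = [4, 10, 12, 12]          # L, AH, H, H for alternative x1
    print(ulgwahm(beta_x1, w, 1, 1))   # about 9.41; Delta and LH^{-1} then give (AH, -0.2941)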
Step 6 . —

We can get x_2 ≻ x_4 ≻ x_1 ≻ x_3; therefore the alternative x_2 is the best choice.

6.3. Sensitivity Analysis of the Parameters in Unbalanced Linguistic Heronian Mean Aggregation Operators

6.3.1. Sensitivity Analysis of the Parameters p or q

In order to illustrate the impact of the parameters p and q on the aggregation results, we fix one of p and q; the resulting rankings of the alternatives based on the ULGWAHM operator are shown in Figures 3 and 4.

Figure 3. Comprehensive value obtained by ULGWAHM (p = 0, q ∈ (0, 8]).

Figure 4. Comprehensive value obtained by ULGWAHM (q = 0, p ∈ (0, 8]).

We can find from Figure 3 that

  1. when q ∈ (0, 0.6960], we have x_1 ≻ x_2 ≻ x_4 ≻ x_3 and the best alternative is x_1;

  2. when q ∈ (2.6960, 8], we have x_1 ≻ x_4 ≻ x_2 ≻ x_3 and the best alternative is x_1.

We can find from Figure 4 that

  1. when p ∈ (0, 0.35], we have x_2 ≻ x_4 ≻ x_3 ≻ x_1 and the optimal alternative is x_2;

  2. when p ∈ (0.35, 8], we have x_2 ≻ x_4 ≻ x_1 ≻ x_3 and the optimal alternative is x_2.

From Figures 3 and 4, we can see that the larger the value of p or q is, the larger the aggregated value is. Therefore, proper selection can be made according to the attitude of the decision-makers. For instance, in practical decision problems, the decision-maker who is pessimistic can choose the smaller values of the parameters p and q while the optimistic one can choose the bigger values of the parameters p and q.

6.3.2. Sensitivity Analysis of the Parameters p and q

If we let the parameters p and q change simultaneously, the associated aggregation results of each alternative can be obtained, as shown in Figures 5–8.

Figure 5. Comprehensive values for alternative x1 obtained by ULGWAHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 6. Comprehensive values for alternative x2 obtained by ULGWAHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 7. Comprehensive values for alternative x3 obtained by ULGWAHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 8. Comprehensive values for alternative x4 obtained by ULGWAHM operator (p ∈ [0, 10], q ∈ [0, 10]).

From Figures 5–8, we can find that the interaction of the arguments becomes stronger as the values of the parameters p and q increase. Therefore, the manager can choose suitable values of p and q to determine the optimal alternative based on the practical need and his/her preference.

If we use the ULGWGHM operator in place of the ULGWAHM operator in the above example, we obtain the overall collective preference value of each alternative with p = q = 1 shown in Table 3, and the aggregation values of each alternative as p and q change simultaneously are shown in Figures 9–12. Obviously, in most cases, the values given by the ULGWGHM operator are smaller than those of the ULGWAHM operator for the same aggregation arguments, which indicates that the former is pessimistic while the latter is optimistic. Thus, the decision-maker can select the best alternative according to his/her preference to meet the real need. This is of crucial importance in practical decision-making.

Table 3. The collective overall preference values obtained by the proposed method.

            x_1                  x_2                   x_3                   x_4
ULGWAHM     LH^{-1}(Δ(9.4118))   LH^{-1}(Δ(9.6729))    LH^{-1}(Δ(7.8489))    LH^{-1}(Δ(9.5434))
            = (AH, −0.2941)      = (AH, −0.16355)      = (M, −0.037775)      = (AH, −0.2283)
Ranking     x_2 ≻ x_4 ≻ x_1 ≻ x_3
ULGWGHM     LH^{-1}(Δ(7.3211))   LH^{-1}(Δ(10.1566))   LH^{-1}(Δ(7.4289))    LH^{-1}(Δ(9.4471))
            = (M, −0.169725)     = (AH, 0.1566)        = (M, −0.142775)      = (AH, −0.27645)
Ranking     x_2 ≻ x_4 ≻ x_3 ≻ x_1
Figure 9. Comprehensive values for alternative x1 obtained by ULGWGHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 10. Comprehensive values for alternative x2 obtained by ULGWGHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 11. Comprehensive values for alternative x3 obtained by ULGWGHM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 12. Comprehensive values for alternative x4 obtained by ULGWGHM operator (p ∈ [0, 10], q ∈ [0, 10]).

From the above analysis, we can choose appropriate values of the parameters p and q and a suitable operator to meet various actual requirements. Consequently, the approach is more feasible and flexible for decision-making problems.

7. Comparison Analyses of the Results Obtained

In this section, a set of comparative studies was conducted with the relevant frequently used aggregation approach and classical decision-making method to demonstrate the feasibility and applicability of the proposed unbalanced linguistic MADM method of this paper.

7.1. Comparison with the Existing Linguistic Aggregation Operators

Firstly, we compare our methods with previous 2-tuple linguistic aggregation operators, including the dependent 2-tuple ordered weighted average (D2TOWA) operator and the dependent 2-tuple ordered weighted geometric (D2TOWG) operator [31], and the 2-tuple weighted Bonferroni mean (2TWBM) operator and the 2-tuple weighted geometric Bonferroni mean (2TWGBM) operator [51].

Wei [31] proposed a series of dependent 2-tuple linguistic (D2TL) aggregation operators and studied a linguistic MADM problem with 2-tuple linguistic information. In order to use the D2TOWA, D2TOWG, 2TWBM, and 2TWGBM operators, the evaluation values of this paper should be transformed into 2-tuple linguistic information as follows:

N → (s_0^5, 0), L → (s_1^5, 0), M → (s_2^5, 0), AH → (s_5^9, 0), H → (s_6^9, 0), QH → (s_{13}^{17}, 0), VH → (s_{14}^{17}, 0), AT → (s_{15}^{17}, 0), T → (s_{16}^{17}, 0).  (64)

After calculating the dependent weights w_11 = 0.3182, w_12 = 0.2576, w_13 = 0.2576, w_14 = 0.1667, w_21 = 0.3261, w_22 = 0.2681, w_23 = 0.2391, w_24 = 0.1667, w_31 = 0.3333, w_32 = 0.2500, w_33 = 0.2500, w_34 = 0.1667, w_41 = 0.3056, w_42 = 0.2500, w_43 = 0.2500, w_44 = 0.1944, the overall collective preference values can be obtained. The comparison is shown in Table 4 (p = q = 1). The aggregation results of the four alternatives obtained by the ULWBM operator as the parameters p and q change simultaneously are shown in Figures 13–16.

Table 4. Comparison with the existing linguistic operators.

Aggregation operators   Linguistic distribution   Parameter number   Order of alternatives
D2TOWA operator         Balanced                  One                x_2 ≻ x_1 ≻ x_4 ≻ x_3
D2TOWG operator         Balanced                  One                x_2 ≻ x_1 ≻ x_4 ≻ x_3
2TWBM operator          Balanced                  Two                x_2 ≻ x_1 ≻ x_4 ≻ x_3
2TWGBM operator         Balanced                  Two                x_2 ≻ x_1 ≻ x_4 ≻ x_3
ULGWAHM operator        Unbalanced                Two                x_2 ≻ x_4 ≻ x_1 ≻ x_3
ULGWGHM operator        Unbalanced                Two                x_2 ≻ x_4 ≻ x_3 ≻ x_1

Figure 13. Comprehensive values for alternative x1 obtained by ULWBM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 14. Comprehensive values for alternative x2 obtained by ULWBM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 15. Comprehensive values for alternative x3 obtained by ULWBM operator (p ∈ [0, 10], q ∈ [0, 10]).

Figure 16. Comprehensive values for alternative x4 obtained by ULWBM operator (p ∈ [0, 10], q ∈ [0, 10]).

Compared with the existing 2-tuple linguistic aggregation operators, our proposed approaches have the following advantages:

(1) The approach in this paper considers the interactions not only between the criteria values e_i and e_j (i < j) but also between e_i and itself, while the method of [31] cannot capture such interactions and the method of [51] ignores the correlation between e_i and itself. Thus, the method in this paper is more effective than the others.

(2) The proposed method of this paper has flexible parameters p and q. We can choose the appropriate values of parameters to satisfy the real demand. But the method in [31] has no selectable parameters. Thus, the proposed method is more flexible.

(3) The methods in [31, 51] can only deal with the case where the input arguments take the form of 2-tuples, whereas ours is suitable for three cases: linguistic variables, 2-tuples, and unbalanced linguistic information, which indicates that ours is more universal.

7.2. The TOPSIS Method for Unbalanced Linguistic MADM

In the following, we put emphasis on the classical TOPSIS method. The basic principle of the TOPSIS method is that the optimal alternative should simultaneously have the farthest distance from the negative ideal solution and the closest distance from the positive ideal solution. The following steps are involved in applying the TOPSIS method.

Step 1 . —

We can obtain the 2-tuple linguistic representation shown in Section 6.2.2.

Step 2 . —

Define the unbalanced linguistic positive ideal solution (ULPIS) and the negative ideal solution (ULNIS). Since the unbalanced linguistic term set is S = {N (none), L (low), M (medium), AH (almost high), H (high), QH (quite high), VH (very high), AT (almost total), T (total)}, the ULNIS and ULPIS are r^− = N and r^+ = T, respectively.

Step 3 . —

Calculate the distance from each evaluation value to ULPIS and ULNIS using the following equation:

d_i^+ = Σ_{j=1}^n w_j d(r_ij, r^+),  d_i^- = Σ_{j=1}^n w_j d(r_ij, r^-)  (65)

where the separation between alternatives is measured by the Hamming distance, i.e., d(r_ij, r) = |Δ^{-1}(TF_t^{t_H}(LH(r_ij))) − Δ^{-1}(TF_t^{t_H}(LH(r)))|; then we can get d_i^+ and d_i^-. It is obvious that the larger d_i^- and the smaller d_i^+, the better the alternative.

Step 4 . —

Calculate the closeness coefficient to ideal solution as

CC_i = d_i^- / (d_i^+ + d_i^-),  i = 1, 2, 3, 4  (66)
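Steps 3 and 4 can be sketched compactly on the β values of the 2-tuples (weighted Hamming distances to the ideal values β^+ and β^−, then the closeness coefficient). The helper below is an illustration under the same assumptions as the earlier sketches, not the paper's implementation.

    def closeness(beta_row, weights, beta_pos, beta_neg):
        """Weighted Hamming distances to the ideal solutions and the closeness coefficient CC_i."""
        d_pos = sum(w * abs(b - beta_pos) for w, b in zip(weights, beta_row))
        d_neg = sum(w * abs(b - beta_neg) for w, b in zip(weights, beta_row))
        return d_neg / (d_pos + d_neg)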

The closeness coefficient to ideal solution for the alternative xi can be obtained as

CC_1 = d_1^-/(d_1^+ + d_1^-) = 0.5442,  CC_2 = d_2^-/(d_2^+ + d_2^-) = 0.6469,  CC_3 = d_3^-/(d_3^+ + d_3^-) = 0.4876,  CC_4 = d_4^-/(d_4^+ + d_4^-) = 0.5941  (67)

According to the closeness coefficients, we can determine the ranking of all alternatives as x_2 ≻ x_4 ≻ x_1 ≻ x_3, and the best alternative is x_2.

Obviously, the ranking of alternatives obtained by the unbalanced linguistic TOPSIS method is identical to that by the ULGWAHM aggregation operator, which states the validity of the proposed method in this paper. Thus, response solution x2 is the most appropriate one.

According to the comparisons, which focus on different angles, we find that the result based on the ULGWAHM operator identifies the same best alternative as the other operators and agrees with the TOPSIS method. In fact, these methods have their own advantages and disadvantages. In summary, the ULGAHM model proposed in this paper has the following characteristics:

(1) The proposed method of this paper is suitable for several kinds of linguistic information, including linguistic variables, 2-tuples, and unbalanced linguistic information, and is therefore universal.

(2) The approach in this paper considers the correlation of all attributes; simultaneously it has flexible parameters to satisfy the complex decision-making problems. Thus, the proposed method is flexible.

(3) We propose a model to deal with the situation where the weight information is unknown. The proposed model for the optimal weight vector is effective and takes objective weight information into consideration.

In summary, the developed method would be more suitable to handle indeterminate information and unbalanced information in complex decision-making problems. Therefore, it is more reasonable than existing methods.

8. Conclusions

This paper focuses on the MADM problem with unbalanced linguistic information and introduces some new unbalanced linguistic Heronian mean aggregation functions by combining unbalanced linguistic information with the Heronian mean operator. First, we have presented the ULGAHM operator and the ULGGHM operator. Then, the ULGWAHM operator and the ULGWGHM operator have been proposed in consideration of the different importance of attributes. These operators are very helpful in situations where the assessed information cannot be expressed with real numbers but only with unbalanced linguistic information. Some main properties and particular cases of the operators have been studied. We have applied the new method to the selection of low-carbon tourism destinations and made the selection based on the new aggregation operators. It is easy to find that the ULGWAHM operator and the ULGWGHM operator identify the same best alternative.

In the future, we expect to extend unbalanced linguistic Heronian mean operator to other situations, such as interval linguistic information, intuitionistic fuzzy linguistic environment, and more complicated situation, and consider other applications.

Acknowledgments

The work was supported by Natural Science Foundation of China (71371011, 71301001, 71771001, 71701001, 71501002, and 61502003), Provincial Natural Science Research Project of Anhui Colleges (KJ2017A026), and The Doctoral Scientific Research Foundation of Anhui University.

Appendix

A. The Proof of Theorem 20

Proof —

(1) Since (s_i, 0) ≥ (s_i′, 0) for all i = 1, ⋯, n, then TF_{t_i}^{t_H}(LH(s_i, 0)) ≥ TF_{t_i′}^{t_H}(LH(s_i′, 0)) and TF_{t_j}^{t_H}(LH(s_j, 0)) ≥ TF_{t_j′}^{t_H}(LH(s_j′, 0)) for all i, j = 1, ⋯, n,

(1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p Δ^{-1}(TF_{t_i}^{t_H}(LH(s_i, 0))) + q Δ^{-1}(TF_{t_j}^{t_H}(LH(s_j, 0))))^{2/(n(n + 1))} ≥ (1/(p + q)) Π_{i=1}^n Π_{j=i}^n (p Δ^{-1}(TF_{t_i′}^{t_H}(LH(s_i′, 0))) + q Δ^{-1}(TF_{t_j′}^{t_H}(LH(s_j′, 0))))^{2/(n(n + 1))}  (A.1)

Thus, ULGGHM^{p,q}((s1, 0), ⋯, (sn, 0)) ≥ ULGGHM^{p,q}((s1′, 0), ⋯, (sn′, 0)).

(2) Since (si, 0) = (s, 0) for all i = 1, ⋯, n, then

$$\begin{aligned}
ULGGHM^{p,q}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{1}{p+q}\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(p\,\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s_i,0))\bigr)+q\,\Delta^{-1}\bigl(TF_{t_j}^{t_H}(LH(s_j,0))\bigr)\Bigr)^{2/(n(n+1))}\right)\right)\\
&=LH^{-1}\!\Bigl(\Delta\Bigl(\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s,0))\bigr)\Bigr)\Bigr)=(s,0).
\end{aligned} \tag{A.2}$$

(3) According to the properties of idempotency and monotonicity, we have (s^−, 0) = ULGGHM^{p,q}((s^−, 0), ⋯, (s^−, 0)) ≤ ULGGHM^{p,q}((s1, 0), ⋯, (sn, 0)) and (s^+, 0) = ULGGHM^{p,q}((s^+, 0), ⋯, (s^+, 0)) ≥ ULGGHM^{p,q}((s1, 0), ⋯, (sn, 0)); i.e., (s^−, 0) ≤ ULGGHM^{p,q}((s1, 0), ⋯, (sn, 0)) ≤ (s^+, 0). A brief numerical illustration of these properties is given below.
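As an informal sanity check of Theorem 20, the following Python sketch (our own illustration, not part of the paper) verifies monotonicity, idempotency, and boundedness numerically for the crisp generalized geometric Heronian mean that underlies the ULGGHM operator; it works directly on real numbers and deliberately omits the linguistic layer (the LH, TF, and Δ transformations).

```python
import itertools


def gghm(values, p, q):
    """Crisp generalized geometric Heronian mean:
    (1/(p+q)) * prod_{i<=j} (p*a_i + q*a_j)^(2/(n(n+1)))."""
    n = len(values)
    prod = 1.0
    for i, j in itertools.combinations_with_replacement(range(n), 2):
        prod *= (p * values[i] + q * values[j]) ** (2.0 / (n * (n + 1)))
    return prod / (p + q)


a = [2.0, 3.0, 5.0]
b = [2.0, 3.0, 4.0]  # b <= a component-wise
p, q = 2.0, 1.0

assert gghm(a, p, q) >= gghm(b, p, q)           # monotonicity
assert abs(gghm([4.0] * 3, p, q) - 4.0) < 1e-9  # idempotency
assert min(a) <= gghm(a, p, q) <= max(a)        # boundedness
```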

B. The Proof of Theorem 22

Proof —

(1) Since w1 = ⋯ = wn = 1/n, according to (41), we have

$$\begin{aligned}
ULGWAHM^{p,q}\bigl((s_1,0),\dots,(s_n,0)\bigr)
&=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(\tfrac{1}{n}\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(\tfrac{1}{n}\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(\tfrac{1}{n}\bigr)^{p}\bigl(\tfrac{1}{n}\bigr)^{q}\Bigr)^{1/(p+q)}}\right)\right)\\
&=LH^{-1}\!\left(\Delta\!\left(\frac{\tfrac{1}{n}\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\tfrac{1}{n}\bigl(\tfrac{n(n+1)}{2}\bigr)^{1/(p+q)}}\right)\right)\\
&=LH^{-1}\!\left(\Delta\!\left(\Bigl(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}\right)\right)\\
&=ULGAHM^{p,q}\bigl((s_1,0),\dots,(s_n,0)\bigr).
\end{aligned} \tag{B.1}$$

(2) Since (si, 0) ≥ (si′, 0) for all i = 1, ⋯, n, and p, q ≥ 0.

Then

$$\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\ge\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i',0)))\bigr)^{p},\qquad \bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\ge\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j',0)))\bigr)^{q}. \tag{B.2}$$

Further,

$$\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}w_j^{q}\Bigr)^{1/(p+q)}}\;\ge\;\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i',0)))\bigr)^{p}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j',0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}w_j^{q}\Bigr)^{1/(p+q)}}. \tag{B.3}$$

Thus, ULGWAHM^{p,q}((s1, 0), ⋯, (sn, 0)) ≥ ULGWAHM^{p,q}((s1′, 0), ⋯, (sn′, 0)).

(3) Since (si, 0) = (s, 0) for all i = 1, ⋯, n, then

$$\begin{aligned}
ULGWAHM^{p,q}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s,0)))\bigr)^{p}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}w_j^{q}\Bigr)^{1/(p+q)}}\right)\right)\\
&=LH^{-1}\!\Bigl(\Delta\Bigl(\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s,0))\bigr)\Bigr)\Bigr)=(s,0).
\end{aligned} \tag{B.4}$$

(4) According to the properties of idempotency and monotonicity, we can get (s^−, 0) ≤ ULGWAHM^{p,q}((s1, 0), ⋯, (sn, 0)) ≤ (s^+, 0). A brief numerical check of the above properties on crisp arguments is given below.
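The crisp core of the ULGWAHM operator can be checked in the same spirit. The sketch below (again our own illustration on real numbers, omitting the LH/TF/Δ machinery) confirms the reduction to the unweighted operator under equal weights, item (1) of Theorem 22, together with idempotency and boundedness.

```python
import itertools


def gahm(values, p, q):
    """Crisp generalized arithmetic Heronian mean:
    (2/(n(n+1)) * sum_{i<=j} a_i^p * a_j^q)^(1/(p+q))."""
    n = len(values)
    s = sum(values[i] ** p * values[j] ** q
            for i, j in itertools.combinations_with_replacement(range(n), 2))
    return (2.0 * s / (n * (n + 1))) ** (1.0 / (p + q))


def gwahm(values, weights, p, q):
    """Crisp core of the weighted operator in Theorem 22:
    (sum_{i<=j} (w_i a_i)^p (w_j a_j)^q)^(1/(p+q)) / (sum_{i<=j} w_i^p w_j^q)^(1/(p+q))."""
    pairs = list(itertools.combinations_with_replacement(range(len(values)), 2))
    num = sum((weights[i] * values[i]) ** p * (weights[j] * values[j]) ** q
              for i, j in pairs)
    den = sum(weights[i] ** p * weights[j] ** q for i, j in pairs)
    return (num / den) ** (1.0 / (p + q))


a, p, q = [2.0, 3.0, 5.0], 2.0, 3.0
n = len(a)

# Theorem 22(1): equal weights collapse the weighted operator to the unweighted one.
assert abs(gwahm(a, [1.0 / n] * n, p, q) - gahm(a, p, q)) < 1e-9

# Idempotency and boundedness with an arbitrary weight vector.
w = [0.2, 0.3, 0.5]
assert abs(gwahm([4.0] * n, w, p, q) - 4.0) < 1e-9
assert min(a) <= gwahm(a, w, p, q) <= max(a)
```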

C. Some Special Cases of the ULGWAHM Operator in regard to Parameters p and q

  • (1)
    If q → 0, then
    $$\begin{aligned}
    ULGWAHM^{p,0}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}w_j^{q}\Bigr)^{1/(p+q)}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\Bigr)^{1/p}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}\Bigr)^{1/p}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\Bigl(\sum_{i=1}^{n}\frac{(n-i+1)\,w_i^{p}}{\sum_{k=1}^{n}(n-k+1)\,w_k^{p}}\bigl(\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\Bigr)^{1/p}\right)\right)\\
    &=\left(\sum_{i=1}^{n}\frac{(n-i+1)\,w_i^{p}}{\sum_{k=1}^{n}(n-k+1)\,w_k^{p}}(s_i,0)^{p}\right)^{1/p},
    \end{aligned} \tag{C.1}$$
  • which is called the unbalanced linguistic generalized weighted mean (ULGWM) operator.

  • (2)
    If p → 0, then
    $$\begin{aligned}
    ULGWAHM^{0,q}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{p}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/(p+q)}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{p}w_j^{q}\Bigr)^{1/(p+q)}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/q}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_j^{q}\Bigr)^{1/q}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\Bigl(\sum_{j=1}^{n}\frac{j\,w_j^{q}}{\sum_{k=1}^{n}k\,w_k^{q}}\bigl(\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{q}\Bigr)^{1/q}\right)\right)\\
    &=\left(\sum_{j=1}^{n}\frac{j\,w_j^{q}}{\sum_{k=1}^{n}k\,w_k^{q}}(s_j,0)^{q}\right)^{1/q}.
    \end{aligned} \tag{C.2}$$
  • The ULGWAHM reduces to the unbalanced linguistic generalized weighted mean (ULGWM) operator.

  • (3)
    If p = q = 1, then
    $$\begin{aligned}
    ULGWAHM^{1,1}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\,w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\Bigr)^{1/2}}{\Bigl(\sum_{i=1}^{n}\sum_{j=i}^{n}w_iw_j\Bigr)^{1/2}}\right)\right)\\
    &=\left(\sum_{i=1}^{n}\sum_{j=i}^{n}\frac{w_iw_j}{\sum_{i=1}^{n}\sum_{j=i}^{n}w_iw_j}(s_i,0)(s_j,0)\right)^{1/2}.
    \end{aligned} \tag{C.3}$$
  • (4)
    If p = q = 1/2, we obtain
    $$\begin{aligned}
    ULGWAHM^{1/2,1/2}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{\sum_{i=1}^{n}\sum_{j=i}^{n}\bigl(w_i\,\Delta^{-1}(TF_{t_i}^{t_H}(LH(s_i,0)))\bigr)^{1/2}\bigl(w_j\,\Delta^{-1}(TF_{t_j}^{t_H}(LH(s_j,0)))\bigr)^{1/2}}{\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{1/2}w_j^{1/2}}\right)\right)\\
    &=\sum_{i=1}^{n}\sum_{j=i}^{n}\frac{w_i^{1/2}w_j^{1/2}}{\sum_{i=1}^{n}\sum_{j=i}^{n}w_i^{1/2}w_j^{1/2}}(s_i,0)^{1/2}(s_j,0)^{1/2},
    \end{aligned} \tag{C.4}$$
  • which we call the unbalanced linguistic general weighted mean (ULGWM) operator. A brief numerical check of the q → 0 case (C.1) is given after this list.
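Assuming the crisp core of the ULGWAHM operator used in Appendix B, the limiting case (C.1) can be verified numerically. In the Python sketch below (our own illustration; the linguistic LH/TF/Δ layer is omitted), the weighted arithmetic Heronian mean with a very small q is compared against the closed-form limit with coefficients (n − i + 1)w_i^p.

```python
import itertools


def gwahm(values, weights, p, q):
    """Crisp core of the ULGWAHM operator (same form as in Appendix B)."""
    pairs = list(itertools.combinations_with_replacement(range(len(values)), 2))
    num = sum((weights[i] * values[i]) ** p * (weights[j] * values[j]) ** q
              for i, j in pairs)
    den = sum(weights[i] ** p * weights[j] ** q for i, j in pairs)
    return (num / den) ** (1.0 / (p + q))


def limit_q_to_zero(values, weights, p):
    """Closed-form limit (C.1):
    (sum_i (n-i+1) (w_i a_i)^p / sum_k (n-k+1) w_k^p)^(1/p)."""
    n = len(values)
    num = sum((n - i) * (weights[i] * values[i]) ** p for i in range(n))
    den = sum((n - i) * weights[i] ** p for i in range(n))
    return (num / den) ** (1.0 / p)


a, w, p = [2.0, 3.0, 5.0], [0.2, 0.3, 0.5], 2.0
# For a very small q, the general operator is close to the limiting form (C.1).
assert abs(gwahm(a, w, p, q=1e-8) - limit_q_to_zero(a, w, p)) < 1e-5
```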

D. Some Special Cases of the ULGWGHM Operator in regard to Parameters p and q

  • (1)
    If q → 0, then
    $$\begin{aligned}
    ULGWGHM^{p,0}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{1}{p}\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(p\,\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s_i,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\prod_{i=1}^{n}\Bigl(\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s_i,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}}\right)\right)\\
    &=\prod_{i=1}^{n}(s_i,0)^{\frac{2(n-i+1)}{n(n+1)}},
    \end{aligned} \tag{D.1}$$
  • which we call the unbalanced linguistic geometric mean (ULGM) operator with the descending weight vector. It does not depend on p when q → 0.

  • (2)
    If p → 0, then
    $$\begin{aligned}
    ULGWGHM^{0,q}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{1}{q}\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(q\,\Delta^{-1}\bigl(TF_{t_j}^{t_H}(LH(s_j,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}\right)\right)\\
    &=LH^{-1}\!\left(\Delta\!\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(\Delta^{-1}\bigl(TF_{t_j}^{t_H}(LH(s_j,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}\right)\right)\\
    &=\prod_{i=1}^{n}\prod_{j=i}^{n}(s_j,0)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}.
    \end{aligned} \tag{D.2}$$
  • The ULGWGHM operator reduces to the unbalanced linguistic weighted geometric mean (ULWGM) operator. It does not depend on q when p → 0.

  • (3)
    If p = q = 1, then
    $$\begin{aligned}
    ULGWGHM^{1,1}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{1}{2}\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s_i,0))\bigr)+\Delta^{-1}\bigl(TF_{t_j}^{t_H}(LH(s_j,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}\right)\right)\\
    &=\frac{1}{2}\prod_{i=1}^{n}\prod_{j=i}^{n}\bigl((s_i,0)\oplus(s_j,0)\bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}.
    \end{aligned} \tag{D.3}$$
  • (4)
    If p = q = 1/2, then
    $$\begin{aligned}
    ULGWGHM^{1/2,1/2}\bigl((s_1,0),\dots,(s_n,0)\bigr)&=LH^{-1}\!\left(\Delta\!\left(\frac{1}{2}\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(\Delta^{-1}\bigl(TF_{t_i}^{t_H}(LH(s_i,0))\bigr)+\Delta^{-1}\bigl(TF_{t_j}^{t_H}(LH(s_j,0))\bigr)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}}\right)\right)\\
    &=\prod_{i=1}^{n}\prod_{j=i}^{n}\Bigl(\tfrac{1}{2}(s_i,0)\oplus\tfrac{1}{2}(s_j,0)\Bigr)^{\frac{2(n-i+1)}{n(n+1)}\cdot\frac{w_j}{\sum_{k=i}^{n}w_k}},
    \end{aligned} \tag{D.4}$$
  • which we call the unbalanced linguistic weighted geometric Heronian mean (ULWGHM) operator in this case. A brief numerical check of the q → 0 case (D.1) is given after this list.
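Assuming the crisp core of the ULGWGHM operator, the limiting case (D.1) and idempotency can likewise be checked numerically; the Python sketch below is our own illustration on real numbers and again omits the LH/TF/Δ transformations.

```python
import itertools


def gwghm(values, weights, p, q):
    """Crisp core of the ULGWGHM operator used in Appendix D:
    (1/(p+q)) * prod_{i<=j} (p*a_i + q*a_j)^(e_ij),
    with e_ij = (2(n-i+1)/(n(n+1))) * (w_j / sum_{k=i}^{n} w_k)."""
    n = len(values)
    prod = 1.0
    for i, j in itertools.combinations_with_replacement(range(n), 2):
        tail = sum(weights[i:])  # sum_{k=i}^{n} w_k
        e = (2.0 * (n - i) / (n * (n + 1))) * (weights[j] / tail)
        prod *= (p * values[i] + q * values[j]) ** e
    return prod / (p + q)


def limit_q_to_zero(values):
    """Closed-form limit (D.1): prod_i a_i^(2(n-i+1)/(n(n+1))); the weights drop out."""
    n = len(values)
    prod = 1.0
    for i in range(n):
        prod *= values[i] ** (2.0 * (n - i) / (n * (n + 1)))
    return prod


a, w = [2.0, 3.0, 5.0], [0.2, 0.3, 0.5]
# For a very small q, the general operator is close to the limiting form (D.1).
assert abs(gwghm(a, w, p=2.0, q=1e-9) - limit_q_to_zero(a)) < 1e-6
# Idempotency holds for any p, q > 0.
assert abs(gwghm([4.0] * 3, w, p=1.5, q=0.5) - 4.0) < 1e-9
```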

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

1. Guo K., Li W. Combination rule of D-S evidence theory based on the strategy of cross merging between evidences. Expert Systems with Applications. 2011;38(10):13360–13366. doi: 10.1016/j.eswa.2011.04.161.
2. Yager R. R. On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Transactions on Systems, Man, and Cybernetics. 1988;18(1):183–190. doi: 10.1109/21.87068.
3. Xu Z. S., Da Q. L. The ordered weighted geometric averaging operators. International Journal of Intelligent Systems. 2002;17(7):709–716. doi: 10.1002/int.10045.
4. Xu Z. Dependent uncertain ordered weighted aggregation operators. Information Fusion. 2008;9(2):310–316. doi: 10.1016/j.inffus.2006.10.008.
5. Xu Z. S., Da Q. L. The uncertain OWA operator. International Journal of Intelligent Systems. 2002;17(6):569–575. doi: 10.1002/int.10038.
6. Yager R. R. Generalized OWA aggregation operators. Fuzzy Optimization and Decision Making. 2004;3(1):93–107. doi: 10.1023/B:FODM.0000013074.68765.97.
7. Zhou L.-G., Chen H.-Y. Continuous generalized OWA operator and its application to decision making. Fuzzy Sets and Systems. 2011;168:18–34. doi: 10.1016/j.fss.2010.05.009.
8. Merigó J. M., Casanovas M. Induced and uncertain heavy OWA operators. Computers & Industrial Engineering. 2011;60(1):106–116. doi: 10.1016/j.cie.2010.10.005.
9. Merigó J. M., Gil-Lafuente A. M. The induced generalized OWA operator. Information Sciences. 2009;179(6):729–741. doi: 10.1016/j.ins.2008.11.013.
10. Liao H., Xu Z. Some new hybrid weighted aggregation operators under hesitant fuzzy multi-criteria decision making environment. Journal of Intelligent & Fuzzy Systems. 2014;26(4):1601–1617.
11. Liao H., Xu Z. Extended hesitant fuzzy hybrid weighted aggregation operators and their application in decision making. Soft Computing. 2015;19(9):2551–2564. doi: 10.1007/s00500-014-1422-6.
12. Liu P., Wang P. Some q-rung orthopair fuzzy aggregation operators and their applications to multiple-attribute decision making. International Journal of Intelligent Systems. 2018;33(2):259–280. doi: 10.1002/int.21927.
13. Beliakov G., Pradera A., Calvo T. Aggregation Functions: A Guide for Practitioners. Berlin, Germany: Springer; 2007.
14. Sykora S. Mathematical Means and Averages: Generalized Heronian Means. Stan's Library; 2009.
15. Sykora S. Generalized Heronian Means II. Stan's Library; 2009.
16. Yu D. J. Intuitionistic fuzzy geometric Heronian mean aggregation operators. Applied Soft Computing. 2013;13(2):1235–1246. doi: 10.1016/j.asoc.2012.09.021.
17. Liu P., Liu Z., Zhang X. Some intuitionistic uncertain linguistic Heronian mean operators and their application to group decision making. Applied Mathematics and Computation. 2014;230:570–586. doi: 10.1016/j.amc.2013.12.133.
18. Chu Y., Liu P. Some two-dimensional uncertain linguistic Heronian mean operators and their application in multiple-attribute decision making. Neural Computing and Applications. 2015;26(6):1461–1480. doi: 10.1007/s00521-014-1813-8.
19. Liu P., Liu J., Merigó J. M. Partitioned Heronian means based on linguistic intuitionistic fuzzy numbers for dealing with multi-attribute group decision making. Applied Soft Computing. 2018;62:395–422. doi: 10.1016/j.asoc.2017.10.017.
20. Liu P., Chen S. M. Group decision making based on Heronian aggregation operators of intuitionistic fuzzy numbers. IEEE Transactions on Cybernetics. 2017;47(9):2514–2530. doi: 10.1109/TCYB.2016.2634599.
21. Xu Z. Uncertain Multi-Attribute Decision Making: Methods and Applications. Tsinghua University; 2015.
22. Xu Z.-S., Chen J. An interactive method for fuzzy multiple attribute group decision making. Information Sciences. 2007;177(1):248–263. doi: 10.1016/j.ins.2006.03.001.
23. Zadeh L. A. The concept of a linguistic variable and its application to approximate reasoning I. Information Sciences. 1975;8:199–249. doi: 10.1016/0020-0255(75)90036-5.
24. Liu P., Chen S.-M. Multiattribute group decision making based on intuitionistic 2-tuple linguistic information. Information Sciences. 2018;430/431:599–619. doi: 10.1016/j.ins.2017.11.059.
25. Liu P. Multiple attribute group decision making method based on interval-valued intuitionistic fuzzy power Heronian aggregation operators. Computers & Industrial Engineering. 2017;108:199–212. doi: 10.1016/j.cie.2017.04.033.
26. Liu P. D., Liu J., Chen S. M. Some intuitionistic fuzzy Dombi Bonferroni mean operators and their application to multi-attribute group decision making. Journal of the Operational Research Society. 2018;69(1):24.
27. Liu P., Chen S.-M., Liu J. Multiple attribute group decision making based on intuitionistic fuzzy interaction partitioned Bonferroni mean operators. Information Sciences. 2017;411:98–121. doi: 10.1016/j.ins.2017.05.016.
28. Liu P., Li H. Interval-valued intuitionistic fuzzy power Bonferroni aggregation operators and their application to group decision making. Cognitive Computation. 2017:1–19. doi: 10.1007/s12559-017-9453-9.
29. Merigó J. M., Casanovas M., Martínez L. Linguistic aggregation operators for linguistic decision making based on the Dempster-Shafer theory of evidence. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 2010;18(3):287–304. doi: 10.1142/S0218488510006544.
30. Pei Z., Ruan D., Liu J., Xu Y. A linguistic aggregation operator with three kinds of weights for nuclear safeguards evaluation. Knowledge-Based Systems. 2012;28:19–26. doi: 10.1016/j.knosys.2011.10.016.
31. Wei G., Zhao X. Some dependent aggregation operators with 2-tuple linguistic information and their application to multiple attribute group decision making. Expert Systems with Applications. 2012;39(5):5881–5886. doi: 10.1016/j.eswa.2011.11.120.
32. Wan S.-P. 2-Tuple linguistic hybrid arithmetic aggregation operators and application to multi-attribute group decision making. Knowledge-Based Systems. 2013;45:31–40. doi: 10.1016/j.knosys.2013.02.002.
33. Wan S.-P. Some hybrid geometric aggregation operators with 2-tuple linguistic information and their applications to multi-attribute group decision making. International Journal of Computational Intelligence Systems. 2013;6(4):750–763. doi: 10.1080/18756891.2013.804144.
34. Xu Z. A method based on linguistic aggregation operators for group decision making with linguistic preference relations. Information Sciences. 2004;166(1-4):19–30. doi: 10.1016/j.ins.2003.10.006.
35. Pang Q., Wang H., Xu Z. Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences. 2016;369:128–143. doi: 10.1016/j.ins.2016.06.021.
36. Wang J. Q., Li J. J. The multi-criteria group decision making method based on multi-granularity intuitionistic two semantics. Science and Technology Information. 2009;33:8–9.
37. Herrera F., Martínez L. A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Transactions on Fuzzy Systems. 2000;8(6):746–752. doi: 10.1109/91.890332.
38. Liao H., Xu Z., Zeng X.-J. Distance and similarity measures for hesitant fuzzy linguistic term sets and their application in multi-criteria decision making. Information Sciences. 2014;271:125–142. doi: 10.1016/j.ins.2014.02.125.
39. Xu Z., Wang H. On the syntax and semantics of virtual linguistic terms for information fusion in decision making. Information Fusion. 2017;34:43–48. doi: 10.1016/j.inffus.2016.06.002.
40. Rodriguez R. M., Martinez L., Herrera F. Hesitant fuzzy linguistic term sets for decision making. IEEE Transactions on Fuzzy Systems. 2012;20(1):109–119. doi: 10.1109/TFUZZ.2011.2170076.
41. Herrera-Viedma E., López-Herrera A. G. A model of an information retrieval system with unbalanced fuzzy linguistic information. International Journal of Intelligent Systems. 2007;22(11):1197–1214. doi: 10.1002/int.20244.
42. Herrera F., Herrera-Viedma E., Martínez L. A fuzzy linguistic methodology to deal with unbalanced linguistic term sets. IEEE Transactions on Fuzzy Systems. 2008;16(2):354–370. doi: 10.1109/TFUZZ.2007.896353.
43. Zou L., Pei Z., Karimi H. R., Shi P. The unbalanced linguistic aggregation operator in group decision making. Mathematical Problems in Engineering. 2012: Article ID 619162, 12 pages.
44. Meng D., Pei Z. On weighted unbalanced linguistic aggregation operators in group decision making. Information Sciences. 2013;223:31–41. doi: 10.1016/j.ins.2012.09.032.
45. Jiang L., Liu H., Cai J. The power average operator for unbalanced linguistic term sets. Information Fusion. 2015;22:85–94. doi: 10.1016/j.inffus.2014.06.002.
46. Pei Z., Shi P. Fuzzy risk analysis based on linguistic aggregation operators. International Journal of Innovative Computing, Information and Control. 2011;7(12):7105–7118.
47. Cordón O., Herrera F., Zwir I. Linguistic modeling by hierarchical systems of linguistic rules. IEEE Transactions on Fuzzy Systems. 2002;10(1):2–20. doi: 10.1109/91.983275.
48. Martínez L., Rodriguez R. M., Herrera F. The 2-Tuple Linguistic Model: Computing with Words in Decision Making. Springer International Publishing; 2015.
49. Herrera F., Martínez L. A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2001;31(2):227–234. doi: 10.1109/3477.915345.
50. Hui L., Zhou J. W. Linguistic multi-attribute group decision making with risk preferences and its use in low-carbon tourism destination selection. International Journal of Environmental Research and Public Health. 2017;14(9):1078. doi: 10.3390/ijerph14091078.
51. Liu J. P., Lin S., Chen H. Y. 2-tuple linguistic Bonferroni aggregation operators and their applications to multi-attribute group decision making. Operations Research and Management Science. 2013;22(5):122–127.
