Abstract
Dunbar's number is the cognitive limit on the number of individuals with whom one can maintain stable relationships in his social network; it is based on the size of the neocortex of the human brain. Trust, on the other hand, is one of the major issues for an individual while selecting members for his social network and during the evolution of that network with time. Trust and Dunbar's number are interconnected in the case of one's stable social network. Trust needs time to be built, through several social interactions, intimacy, etc. In this paper, we try to provide answers to the following important questions related to social networks:
(i) Do trust levels remain the same for individuals from one's perspective in his social network when the network size increases?
(ii) What is the relation between the power-law exponent α and the trust cutoff?
(iii) Do trust levels help to diffuse information quickly or vice versa to reach Dunbar's number 150 along with hierarchy layers of 5, 15, and 50 individuals in networks of different sizes?
We find that there is a requirement for trust levels to increase among the same individuals in one's social network if the size of the network increases. As a relation between the power-law exponent α and the trust cutoff, it is found that α ∝ 1/(trust cutoff). Moreover, we also find that trust levels never help to diffuse information quickly, or vice versa, to reach Dunbar's number 150, along with hierarchy layers of 5, 15, and 50 individuals, in networks of different sizes.
Keywords: Social networks, Uniform distribution, Power-law distribution, Power-law exponent, Trust, Dunbar's number, Evolutionary biology, Neocortex
1. Introduction
“Social networks” is an area of interdisciplinary research that developed from the ideas of graph theory [1] in the last century. Erdös and Rényi [2] laid the theoretical foundation of random graphs in 1960. Prior to 1999, it was believed that the social networks of individuals were nothing but random graphs [3]. In 1999, Barabási and Albert [3] showed that several real-world networks, including online and biological networks, are not random graphs. After Barabási and Albert [3], the research area “Social networks” has grown into an interdisciplinary field. It is important to note that the renowned graph theorist Frank Harary realized the importance of social networks several decades ago and thus made several contributions in Refs. [4,5]. Moreover, it is easy to find connections between topology and social networks. One may refer to Refs. [[6], [7], [8], [9], [10], [11], [12], [13]] for recent research on topological aspects of social networks. Also, Refs. [14,15] can be consulted to find connections between social networks and computer science.
Trust helps to grow human social relationships [16]. It binds individuals through benevolent acts and reciprocal interactions. Thus, it becomes the basis of friendship [16,17]. According to Mitchell [18], trust is governed by social exchanges, viz., social presence, verbal and nonverbal communication, etc. In these types of social exchanges, the honesty or deception of the other party is assessed. A psychological definition of trust can be found in Rotter [19]. In Ref. [19], Rotter defined trust as “an expectancy held by an individual or a group that the word, promise, verbal or written statement of another individual or group can be relied on”. Similarly, a psychological definition of deception is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief that the communicator considers to be untrue in order to increase the communicator's payoff at the expense of the other side” [20].
Several theories on human friendship have suggested the importance of reciprocity and the exchange of benevolent acts in building social relationships [16,21,22]. Trust is one of the implicit factors in social relationships [16,21,22]. An experimental study by Hays [21] showed the advantages of investing resources in fewer but more intimate social relationships. Moreover, Granovetter [22] showed that weak ties often help to obtain more information and resources. According to Baumeister and Leary [23], mutual trust starts to grow between individuals, which leads to reliance on each other for emotional support, companionship, help, etc., when collaborations start to grow between the groups. Trust evolves to enhance collaboration in assessing the quality of trustworthiness or deception [24]. Hays [25,26] observed that close friendships involve more weekly interactions, across a wider range of days, locations, and times, than casual friendships. Close friends offer more informal and moral support than casual friends. Moreover, different dimensions of trust-oriented behaviors in friendships were classified, viz., positivity, openness, interaction and supportiveness. According to Sutcliffe et al. [27], best-friendships are found to be self-maintained. Roberts and Renwick [28] showed that the reputations of individuals based on their histories of collaboration help to build their social relationships.
Nowak and Sigmund [29] established that, in the case of a computer simulated repeated prisoner's dilemma, cooperative strategies spread in populations if histories of interactions are assessable. In the case of socio-cognitive models of trust, several factors influence trusting relationships [30,31]. Falcone and Castelfranchi [30,31] identified the degree of delegation between two parties, risks, the motivations and goals shared by parties for social relationships, and reputations as the factors that influence trusting relationships. One may refer to Castelfranchi and Falcone [32] for more elaboration on social trust. Moreover, it is found that indirect reciprocity encourages social relationships and helps to form groups [33].
There are several studies on trust from the computational aspects. Sutcliffe and Wang [34] described the process of the development of social relationships using a computational model. Sutcliffe et al. [17] presented a social trust model to investigate the relationship between social networks and social relationships in real life and in social media. They found the existence of a few strong relationships in multilevel social structures. Their results demonstrated the existence of more medium ties and a large number of weak ties if one rewards well-being and alliances with high-level social interactions. Interested readers may refer to Ref. [17] for extensive literature on trust and related computational models. For recent research on trust in the cases of Facebook and Twitter, one may refer to Sabatini and Sarracino [35] and many others.
Both trust and deception are topics of equal interest to social scientists, economists, and network scientists. Gneezy [36] discussed the role of consequences through deception in economics. According to Gneezy [36], most economic interactions include deceptions. The extreme assumption of economic theory is that each agent is “homo economicus” during economic interactions. It means that each agent acts selfishly during economic interactions and does not bother about the well-being of others. Thus, deception is one of the major properties of selfish acts by an economic agent. For example, Akerlof [37] found that car sellers always lie for their own benefit in cases of asymmetric information. A similar argument in mechanism design [38] says that people trust if there is scope for material outcomes as incentives. From the perspective of utilitarian philosophy, Bentham [39] argued that an individual must weigh benefits against harm, happiness against unhappiness, etc., before telling a lie. But a contradictory view of utilitarian philosophy came from Bok [40]. According to Bok [40], there is no difference between a truthful statement and a lie if both achieve the same monetary payoff. Thus, from Refs. [39,40], can we claim that we must be cautious when trusting individuals in our social networks?
There are four types of lying [36,41]: (i) pro-social; (ii) self-enhancement; (iii) selfish; and (iv) antisocial. Gneezy [36] suggested that people consider gain at the time of telling lies. Moreover, people also care about the losses of others along with their own gains. Serota et al. [42] conducted an experiment with 1000 Americans, and it was found that on average 1.65 lies were told by each American per day. Similarly, other experiments [[43], [44], [45], [46]] found that lies were told at a frequency of 0.6–2.0 per day, and that face-to-face interactions had a lower chance of deception in comparison to telephonic conversations. Gino and Pierce [47] conducted experiments and found that people opt for dishonest behavior to relieve emotional distress caused by wealth-based inequity. Moreover, people increase hurting behavior and reduce helping behavior in the case of their own experiences of negative inequity or negative emotions, and they also increase dishonest helping behavior when they experience positive inequity or positive emotions [47]. Arnaboldi et al. [48] modelled the trusted information diffusion pattern in online social networks with the assumption that two nodes have trusted ties with reciprocal behavior. They showed that only up to 3% of ties are used for information diffusion if there are strong ties in online social networks. They also found that, on trustworthy paths for communication in online social networks, two individuals are more than twice as far away from each other. Wang et al. [49] experimentally showed that badness influences psychological reactions more than goodness, but goodness influences behavior more than badness. Similar results were predicted by Panasiti et al. [50]. In the case of deception, one's own reputation encourages other individuals in his social networks to be honest [50]. Moreover, they found that unfavorable situations motivate one to deceive in a stronger way. Gino et al. [51] conducted three experiments and concluded that individuals cheat more when their lies benefit others and when the number of beneficiaries of their lies increases. There are several studies on trust and deception, and both have a very strong connection with individuals' social networks in terms of Dunbar's number.
There are mainly two approaches to studying social networks. One is the “six degrees of separation” or “small world phenomenon” [52,53], and the other is Dunbar's number [54]. The small world phenomenon focuses on the maximum number of intermediate acquaintance links needed to connect any two individuals. On the other hand, Dunbar's number [54,55] is based on the limit of human cognition for maintaining stable relationships with individuals in one's social network. Dunbar's number is one of the fundamental factors of ego-centric social networks. Based on correlative studies of the sizes of social networks and neocortex sizes in the brains of primates and humans, Dunbar [54] found that an individual can maintain stable relationships with only 150 individuals in his social network. This was originally found for offline social networks, but the same result was confirmed, after the evolution of the internet, for online social networks [25,56]. Dunbar's hierarchy layers consist of alters with a scaling ratio of approximately 3 [57,58]. These hierarchy layers form a series of concentric circles of acquaintanceship [56,59]. An individual is placed at the centre of these hierarchy layers, and the remaining concentric circles consist of 5, 15, 50, 150, 500, 1500, and 5000 individuals cumulatively [56,59]. The hierarchy layers consisting of 5 and 15 individuals are called the support clique and the sympathy group, respectively. Stiller and Dunbar [59] found that individual differences in social cognition better explain the size of an individual's support clique. On the other hand, the size of the sympathy group is well explained by an individual's performance on memory tasks. Moreover, Dunbar [60] showed that six degrees of separation does not necessarily mean that information flows at a uniform rate in one's social network. For a recent review on Dunbar's number, one may refer to Ref. [61]. From Refs. [49,60], we can ask the following questions:
(i) Do trust levels remain the same for individuals from one's perspective in his social network when the network size increases?
(ii) What is the relation between the power-law exponent α and the trust cutoff?
(iii) Do trust levels help to diffuse information quickly or vice versa to reach Dunbar's number 150 along with hierarchy layers of 5, 15, and 50 individuals in networks of different sizes?
2. Uniform distribution of trust
In this section, we try to find the nature of one's trust levels with acquaintances in his social network when the size of the network starts to increase. Let us consider a social network with a population of N (>1) individuals, each of whom has an initial trust value t_i, where t_i ∈ [0, 1]. We consider a constant c ∈ [0, 1] as the critical value of trust for an individual, above which he is susceptible to receiving information as well as transmitting information to others. The individuals below the critical value of trust are considered non-reliable by their neighbors, and thus they don't participate in information transmission. Based on individuals' roles in information transmission, we divide the population into the following three categories.
1. Susceptible (S),
2. Ignorant (I),
3. Transmitters (T).
The susceptible (S) people are those individuals who can receive the information and become transmitters (T). The ignorant (I) category consists of individuals with trust values below the critical value of trust c. We assume β to be the rate of diffusion with which a transmitter sends information to neighboring susceptible individuals. In this paper, we consider two types of distribution for the trust values of individuals: a uniform distribution and a power-law distribution.
Now, we consider that all individuals are assigned trust values selected from a uniform distribution on [0, 1]. Let q be the fraction of people with a trust value greater than or equal to c. Hence, the effective number of susceptible individuals is qN. In the case of the uniform distribution, the fraction of the population that participates in information diffusion is q = 1 − c and the fraction of the non-reliable or ignorant population is 1 − q = c. Thus, the dynamic equations for information diffusion in our case are given below.
(1) dS/dt = −βST/N
(2) dI/dt = 0
(3) dT/dt = βST/N
To normalize the dynamic equations, we consider s = S/N, i = I/N and τ = T/N. So, at any instant t, we obtain s + i + τ = 1, i.e., s = 1 − i − τ. Thus, the dynamic equations (1), (2), (3) are reduced to the following equations (4), (5), (6):
(4) ds/dt = −βsτ
(5) di/dt = 0
(6) dτ/dt = βsτ
Since s = 1 − i − τ, with i = c, equation (6) becomes
(7) dτ/dt = βτ(1 − c − τ)
We consider τ = τ0 at t = 0. As τ is time dependent, the solution of (7) takes the logistic form
(8) τ(t) = (1 − c) / (1 + A e^(−β(1−c)t)), where A is a constant fixed by the initial condition.
It is a logistic solution of equation (7) with an initial fraction of transmitters τ0. If t → ∞, then τ(t) → 1 − c. Thus, every susceptible individual gets the information and becomes a transmitter. Applying τ = τ0 at t = 0, equation (8) reduces to the following:
(9) τ(t) = (1 − c)τ0 / (τ0 + (1 − c − τ0) e^(−β(1−c)t))
Thus, from equation (9), we obtain the fraction of transmitters τ(t) at any instant t, given an initial fraction of transmitters τ0, in our social network of N (>1) individuals.
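As a quick illustration, the closed-form solution can be evaluated directly. The following R sketch follows our reconstruction of equations (7)-(9) above; the function and variable names are ours, not from the original simulation code, and the example uses a single initial transmitter in a network of N = 150 individuals with trust cutoff 0.66.

```r
# Minimal sketch of the uniform-trust diffusion model, under our reading of
# equations (7)-(9). tau_t() returns the fraction of transmitters at time t
# for trust cutoff c0, diffusion rate beta and initial fraction tau0.
tau_t <- function(t, beta, c0, tau0) {
  K <- 1 - c0                   # limiting fraction of transmitters (susceptible pool)
  K * tau0 * exp(beta * K * t) / (K - tau0 + tau0 * exp(beta * K * t))
}

N    <- 150       # network size
beta <- 0.25      # diffusion rate
c0   <- 0.66      # trust cutoff
tau0 <- 1 / N     # a single initial transmitter (the 'ego')

t_grid   <- seq(0, 300, by = 1)
informed <- N * tau_t(t_grid, beta, c0, tau0)
round(max(informed))   # about 51 informed people, i.e. close to the Dunbar layer of 50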
3. Power-law distribution of trust
According to Barabási and Albert [3], large networks, viz., the WWW, citation patterns in science, the collaboration graph of movie actors, etc., are scale-free networks. In a scale-free network, the probability P(k) that a vertex interacts with k other vertices decays as a power-law, i.e., P(k) is directly proportional to k^(−α), with the power-law exponent α typically satisfying 2 < α < 3.
Prior to Ref. [3], Redner [62] found the existence of a power-law distribution in the citation distribution of published papers, with a power-law exponent of approximately 3. In order to generate a power-law distribution of trust with an exponent α from a uniform random variable u on [0, 1], the trust value t on [t_min, t_max] is given by
(10) t = [(t_max^(1−α) − t_min^(1−α)) u + t_min^(1−α)]^(1/(1−α))
Equation (10) generates power-law-distributed values with t_min as the minimum and t_max as the maximum value of t. The minimum value of t cannot be zero for a power-law distribution, so we fix the minimum value of t as 0.1 and the maximum value of t as 1. The minimum value t_min = 0.1 means that the minimum trust value in the distribution cannot be lower than 0.1. For the critical value of trust c, in order to find q in the case of the power-law distribution, the fraction of individuals with trust values less than c is obtained and subtracted from 1. The probability distribution curve is obtained from 10^6 values of t ∈ [0.1, 1] to get a smooth power-law distribution with the power-law exponent α. Using the data from this power-law distribution and fixing c, we obtain the initial fraction q of the population who are susceptible, i.e., who have trust values greater than or equal to c. The fraction q gives the number of susceptible people participating in information transmission as qN. Thus, the fraction of people in the ignorant category is 1 − q. The remaining dynamics and equations are the same as for the uniform trust distribution.
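The inverse-transform step described above can be sketched in R as follows. This is a minimal illustration under our reading of equation (10), with t_min = 0.1 and t_max = 1 as fixed above; the function name and the example cutoff are our own choices.

```r
# Sketch of inverse-transform sampling of power-law trust values (our reading
# of equation (10)): u is uniform on [0, 1], trust values lie in [t_min, t_max]
# and follow a power law with exponent alpha.
sample_powerlaw_trust <- function(n, alpha, t_min = 0.1, t_max = 1) {
  u <- runif(n)
  a <- t_min^(1 - alpha)
  b <- t_max^(1 - alpha)
  ((b - a) * u + a)^(1 / (1 - alpha))
}

set.seed(1)
trust  <- sample_powerlaw_trust(1e6, alpha = 2.1)  # 10^6 trust values, as in the text
cutoff <- 0.235                                    # critical value of trust c
q      <- mean(trust >= cutoff)                    # susceptible fraction q
round(q * 150)                                     # close to 50 individuals when N = 150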
4. Results and discussions
This section consists of discussions regarding Fig. 1, Fig. 2, Fig. 3, Fig. 4, and Fig. 5. The following conclusions are drawn from the simulations; they provide answers to the questions of Section 1.
Fig. 1.
(i) Diffusion of information among 150 people with different trust cutoff values and β = 0.25, (ii) diffusion of information among 150 people with different trust cutoff values and β = 0.50, (iii) total informed people with different trust cutoff values between 0 and 1, with an increment of 0.01 to visualize Dunbar's hierarchy levels of stability and required trust values for 150 people, (iv) diffusion of information among 500 people with different trust cutoff values and β = 0.25, (v) diffusion of information among 500 people with different trust cutoff values and β = 0.50, (vi) total informed people with different trust cutoff values between 0 and 1, with an increment of 0.01 to visualize Dunbar's hierarchy levels of stability and required trust values for 500 people.
Fig. 2.
(i) Diffusion of information among 1500 people with different trust cutoff values and β = 0.25, (ii) diffusion of information among 1500 people with different trust cutoff values and β = 0.50, (iii) total informed people with different trust cutoff values between 0 and 1, with an increment of 0.01 to visualize Dunbar's hierarchy levels of stability and required trust values for 1500 people, (iv) diffusion of information among 5000 people with different trust cutoff values and β = 0.25, (v) diffusion of information among 5000 people with different trust cutoff values and β = 0.50, (vi) total informed people with different trust cutoff values between 0 and 1, with an increment of 0.01 to visualize Dunbar's hierarchy levels of stability and required trust values for 5000 people.
Fig. 3.
(i) To maintain trust among the same 5 people, trust levels of 0.97 and 0.99 are required once the network size increases to 150 and 500 people, respectively; (ii) to maintain trust among the same 15 people, trust levels of 0.90 and 0.97 are required once the network size increases to 150 and 500 people, respectively; (iii) to maintain trust among the same 50 people, trust levels of 0.66 and 0.90 are required once the network size increases to 150 and 500 people, respectively; (iv) to maintain trust among the same 150 people, the trust level increases from 0.70 for a network of 500 people to 0.90 for a network of 1500 people; (v) to maintain trust among the same 500 people, the trust level increases from 0.68 for a network of 1500 people to 0.90 for a network of 5000 people.
Fig. 4.
Trust cutoffs for the power-law distribution with α = 2.1 and t_max = 1, along with (i) β = 0.25 and N = 150, (ii) β = 0.50 and N = 150, (iii) β = 0.25 and N = 500, (iv) β = 0.50 and N = 500, (v) β = 0.25 and N = 1500, (vi) β = 0.50 and N = 1500, (vii) β = 0.25 and N = 5000, (viii) β = 0.50 and N = 5000.
Fig. 5.
The power-law distribution of trust with (i) trust cutoffs for α = 2.1 to 2.9 when N = 150, (ii) trust cutoffs for α = 2.1 to 2.9 when N = 500, (iii) trust cutoffs for α = 2.1 to 2.9 when N = 1500, (iv) trust cutoffs for α = 2.1 to 2.9 when N = 5000.
We first considered equation (9) to do the simulations. For the simulations, we used the programming language R, version 4.3.0. Since we are considering Dunbar's hierarchy, initially, at time t = 0, there is only one transmitter of the information. This single transmitter corresponds to the existence of the ‘ego’ in Dunbar's hierarchy [60].
Now, we discuss Fig. 1 (i-vi). For (i) to (iii) of Fig. 1, we consider a network of N = 150 individuals with a uniform trust distribution. Information is transmitted with transmission rates β = 0.25 and β = 0.50 in (i) and (ii), respectively, with a single transmitter at t = 0, i.e., T = 1. The total informed people level reaches Dunbar's hierarchy levels of 50, 15, and 5 at trust values of 0.66, 0.90, and 0.96, respectively. In (ii), diffusion occurs more quickly with β = 0.50 than with β = 0.25 in (i). To get (iii) of Fig. 1 and similar figures, we used the simulation methodology described in Section 2. In Fig. 1, (iii) represents the total number of informed people for all trust cutoff values between 0 and 1, in increments of 0.01, so as to easily visualize Dunbar's hierarchy levels of stability and the required trust values. In (iv) to (vi) of Fig. 1, we consider a network of N = 500 individuals with a uniform trust distribution. The information transmission rates in (iv) and (v) are the same as in (i) and (ii) of Fig. 1. Now, we can see from (vi) that at trust values of 0.7 and 0.9, the total informed people level reaches Dunbar's hierarchy levels of 150 and 50, respectively. Moreover, as in the network of N = 150 individuals, the diffusion of information occurs more quickly at β = 0.50 than at β = 0.25 for the network of N = 500 individuals.
In this paragraph, we analyze Fig. 2 (i-vi). We consider a network of N = 1500 individuals with a uniform trust distribution for (i) to (iii) of Fig. 2. We can see that at a trust value of 0.9, the total informed people level reaches Dunbar's hierarchy level of 150. As with the previous two networks, it can be concluded from (i) and (ii) of Fig. 2 that information diffusion occurs more quickly at β = 0.50 than at β = 0.25. As we extend the network size from 1500 to 5000 in (iv) to (vi) of Fig. 2, we find from (vi) that the trust level increases to 0.97 to reach the stable connections of the network, i.e., Dunbar's number 150. Moreover, the information diffusion properties for β = 0.50 remain the same as in our previous networks. From the above four network sizes, it is important to note that the required trust levels remain the same irrespective of the information diffusion rate in a network. It means that trust levels never help to diffuse information quickly, or vice versa, to reach Dunbar's number 150, along with hierarchy layers of 5, 15, and 50 individuals, in networks of different sizes.
In this paragraph, we analyze Fig. 3 (i-v). From (i)-(v) of Fig. 3, we show that the trust levels required for the same individuals in one's network increase once the network size increases. From (i) of Fig. 3, to maintain trust among the same five individuals, trust levels of 0.97 and 0.99 are required once the network size increases to 150 and 500 individuals, respectively. Similarly, to maintain trust among the same 15 individuals, trust levels of 0.90 and 0.97 are required once the network size increases to 150 and 500 individuals, respectively. Again, from (iii) of Fig. 3, to maintain trust among the same 50 individuals, trust levels of 0.66 and 0.90 are required once the network size increases to 150 and 500 individuals, respectively. However, the trust levels approach 1 asymptotically as the network size increases to ∼5000 individuals. In order to maintain trust among the same 150 individuals, the trust level increases from 0.70 for a network of 500 individuals to 0.90 for a network of 1500 individuals. Although one cannot maintain stable relationships with more than 150 individuals, an increase in trust level is also observed in (v) of Fig. 3. Thus, (i)-(v) of Fig. 3 show that the trust levels required among the same individuals in one's social network increase if the size of the network increases. We are not sure about conspiracies in one's social network, but we hope that this increase in required trust levels with increasing network size may be a crucial reason to study various factors behind several types of conspiracies or deceptions against one by the individuals in his social network.
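Under our reading of the uniform-trust model of Section 2, the steady state of equation (9) informs (1 − c)N individuals, so keeping the same k alters informed requires a cutoff of c = 1 − k/N. The short R sketch below is our own recomputation, not the authors' simulation code; it reproduces the trust levels quoted above for Fig. 3, and small differences, such as 0.67 versus 0.66, are due to rounding.

```r
# Under the uniform trust distribution, the steady state of equation (9)
# informs (1 - cutoff) * N people, so keeping the same k alters informed
# needs cutoff = 1 - k/N.
required_cutoff <- function(k, N) 1 - k / N

layers <- c(5, 15, 50, 150)          # Dunbar's hierarchy layers
sizes  <- c(150, 500, 1500, 5000)    # network sizes considered in the paper
m <- round(outer(layers, sizes, required_cutoff), 2)
dimnames(m) <- list(paste0("k=", layers), paste0("N=", sizes))
m
#        N=150 N=500 N=1500 N=5000
# k=5     0.97  0.99   1.00   1.00
# k=15    0.90  0.97   0.99   1.00
# k=50    0.67  0.90   0.97   0.99
# k=150   0.00  0.70   0.90   0.97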
In this paragraph, we analyze Fig. 4 (i-viii). From (i) and (ii) of Fig. 4, if α = 2.1 and t_max = 1 for a network of size N = 150, then the trust cutoff required to reach 50 individuals is 0.235 for both diffusion rates β = 0.25 and β = 0.50. Similarly, from (iii) and (iv) of Fig. 4, if N = 500, then the trust cutoff required to reach 150 individuals is 0.26 for β = 0.25 and β = 0.50. For (v) and (vi) of Fig. 4, if N = 1500, then the trust cutoff required to reach 150 individuals is 0.50 for β = 0.25 and β = 0.50. Again, for (vii) and (viii) of Fig. 4, if N = 5000, then the trust cutoff required to reach 150 individuals is 0.79 for β = 0.25 and β = 0.50. Thus, the trust levels required for the same 150 individuals increase with the size of the network, independently of the diffusion rate. Therefore, one cannot expect that information will be shared by the same individuals if his network size increases.
Now, we turn our attention to the power-law distribution of trust. For this purpose, we analyze Fig. 5 (i-iv). When the total number of informed people in a network of size N = 150 is varied, the trust cutoff required to reach 50 individuals in (i) of Fig. 5 increases with a decrease in the value of the power-law exponent α, where 2 < α < 3. In (i) of Fig. 5, the trust cutoffs are 0.175 and 0.240 for α = 2.9 and α = 2.1, respectively. Similar results can be observed for reaching 150 individuals with the same values of the power-law exponent in (ii), (iii) and (iv) of Fig. 5, but with different network sizes. As observed from the simulations, we can infer that the trust cutoffs increase with a decrease in the power-law exponent α, i.e., α ∝ 1/(trust cutoff). Thus, we have the following conjecture from the above discussions.
Conjecture: Conspiracies or deceptions are caused in one's social network if he doesn't increase his levels of trust for the individuals in his network once the size of the network increases.
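Under the same reading, the power-law case can also be checked in closed form: the fraction of individuals with trust at or above a cutoff follows from the power-law CDF on [t_min, t_max], which can be inverted for the cutoff needed to reach a given number of individuals. The helper below is our own sketch, not the simulation code used for Fig. 4 and Fig. 5; it reproduces values close to the reported ones and shows the cutoff falling as α rises, in line with α ∝ 1/(trust cutoff).

```r
# Invert the power-law CDF on [t_min, t_max] to get the trust cutoff at which
# a fraction k/N of the population stays susceptible (our own helper).
cutoff_powerlaw <- function(k, N, alpha, t_min = 0.1, t_max = 1) {
  a <- t_min^(1 - alpha)
  b <- t_max^(1 - alpha)
  q <- k / N                              # required susceptible fraction
  (b + q * (a - b))^(1 / (1 - alpha))     # solves q = (c^(1-alpha) - b) / (a - b)
}

round(cutoff_powerlaw(k = 50,  N = 150,  alpha = 2.1), 3)   # ~0.237 (reported: 0.235)
round(cutoff_powerlaw(k = 150, N = 1500, alpha = 2.1), 3)   # ~0.497 (reported: 0.50)
round(sapply(seq(2.1, 2.9, by = 0.2),
             function(al) cutoff_powerlaw(50, 150, al)), 3)
# approx. 0.237 0.216 0.200 0.186 0.176 : the cutoff falls as alpha rises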
5. Comparison with online scenarios in Facebook
This section validates some of our findings against recent experimental results related to Facebook. It is well known that Facebook is an influential online social network. Barabási and Albert [3] showed that preferential attachment and growth play important roles in increasing the sizes of real-world networks. Moreover, Ferrara [63,64] analyzed Facebook in 2011 to detect community structure using 500 million users, and in 2012 showed the existence of a power-law distribution for an emerging, well-defined community structure inside Facebook. Since the power-law exponent α reflects the existence of hubs (of higher connectivity) in any real-life network, Facebook is one of the most suitable online social networks for validating the theoretical claims of the previous section; the power-law exponent α thus has several strong implications for Facebook. Using the same simulations as in the previous section, we reach similar conclusions for the 500 million Facebook users [63,64]. We find that there is a requirement for trust levels to increase among the same individuals in one's social network if the size of the network increases to 500 million Facebook users. As a relation between the power-law exponent α and the trust cutoff, it is found that α ∝ 1/(trust cutoff). Moreover, we also find that trust levels never help to diffuse information quickly, or vice versa, to reach Dunbar's number 150, along with hierarchy layers of 5, 15, and 50 individuals, in networks of different sizes. We provide Fig. 6 (i-ii) for the verification of our first conclusion of the previous section in the case of 500 million Facebook users. One can also obtain the remaining conclusions of the previous section by using the same simulations with N = 500 million Facebook users.
Fig. 6.
(i) Diffusion of information among 500 million Facebook users with different trust cutoff values and β = 0.25, (ii) diffusion of information among 500 million Facebook users with different trust cutoff values and β = 0.50.
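For a sense of scale, the same closed-form relations can be evaluated at N = 5 × 10^8 users, the population analysed in Refs. [63,64]. The snippet below is only an illustrative sketch under our reconstruction of the model, not a re-analysis of the Facebook data.

```r
# Illustrative only: the closed-form relations at Facebook scale.
N <- 5e8
q <- 150 / N                     # fraction needed to cover Dunbar's number 150

1 - q                            # uniform trust: required cutoff ~ 0.9999997

alpha <- 2.1; t_min <- 0.1; t_max <- 1
a <- t_min^(1 - alpha)
b <- t_max^(1 - alpha)
(b + q * (a - b))^(1 / (1 - alpha))   # power-law trust: required cutoff ~ 0.999997
```

In both cases the cutoff needed to confine information to the same 150 individuals is essentially 1, which is consistent with the conclusion above that trust requirements keep rising as the network grows.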
In 2014, Kraut and Fiore [65] surveyed 472,231 online groups on Facebook to find out about their success and survival. They found that 57% of these groups ceased to create new content by the end of three months. Groups were found to be less likely to survive when the founders had more ties to group members, when the founders invited more of the members, and when the founders served as exclusive administrators of the group. Moreover, larger groups that engaged founders vigorously in the first week of their formation were found to be more successful than smaller and less engaging groups. Similarly, Ma et al. [66] surveyed 6383 users of Facebook groups about their trust attitudes. Using aggregated behavioral and demographic data for group members, they concluded that (1) an individual's propensity to trust is associated with how they trust their groups, (2) groups that are smaller, closed, older, more exclusive, or more homogeneous are trusted more; and (3) trust can be predicted from a group's overall friendship-network structure and an individual's position within that structure. In Ref. [66], Ma et al. also found that trust decreases with the increase of group size. But interestingly, they confirmed the finding of Kraut and Fiore [65] that trust grows with the number of group administrators. They also suggested that less popular groups should be recommended to individuals because of the diversity of group recommendations, trust, and user satisfaction. Ma et al. [66] did not find any significant difference between closed and secret groups once the groups' sizes crossed Dunbar's number 150. Moreover, Cohen and Havlin [67] and Guardiola et al. [68] found that trust cooperation networks possess scale-free degree distributions. Since the power-law exponent α plays a major role in scale-free degree distributions, combining the results from Refs. [[63], [64], [65], [66], [67]] suggests that our result α ∝ 1/(trust cutoff) may have a significant impact on online and offline social networks. Again, preferential attachment is present in scale-free networks [3,69], so the number of individuals with lower trust cutoffs in an individual's social network increases if the size of his social network grows. So, the trust cutoff is not absolute in social networks. Hence, one must increase the trust cutoffs for individuals in his social network if he doesn't want to be deceived by members of his social network due to the increase in its size. Thus, we have found that one of our predictions matches a data-based observation related to trust and the size of social networks made by Ma et al. [66]. We hope to have more rigorous data-based results to validate our claims in the future.
6. Limitations and scopes of further research
In the case of one's social network, we showed that α ∝ 1/(trust cutoff). To validate our theoretical claims, we considered some related studies on Facebook and trust. However, like Facebook, analyses of Twitter networks have also confirmed Dunbar's number as the cognitive limit of Twitter users [70]. Recently, Lu et al. [71] analyzed 50000 tweets from April 30 to May 30, 2013, covering a total of 42180 participants. They concluded that the frequency distribution of retweeting follows a power-law, with the exponent α ranging from 0.6 to 0.7. Similarly, Aparicio et al. [72] also confirmed a power-law distribution in the case of Twitter. Hentschel et al. [73] identified trustworthy users on Twitter. However, to the best of our knowledge, no analysis of Twitter comparable to the Facebook studies [65,66] has been done so far. This limits our ability to validate our results on social networks such as Twitter at present. Thus, we hope to have suitable experimental validations of our results for Twitter and other social networks in the future.
7. Conclusion
In this paper, we tried to find out how different types of trust distributions, such as uniform and power-law distributions, in a network affect information diffusion, as well as how changes in the size of a network affect the trust levels needed for individuals to keep the same number of friends in each hierarchy layer. We found that the required trust levels remain the same irrespective of the information diffusion rate in a network. It means that trust levels never help to diffuse information quickly, or vice versa, to reach Dunbar's number 150 along with hierarchy layers of 5, 15, and 50 individuals in networks of different sizes. Moreover, the trust levels required among the same individuals in one's social network increase if the size of his social network increases. Also, the trust levels for the same 150 individuals increase with the size of the network, independently of the diffusion rate. Therefore, one cannot expect that information will be shared by the same individuals if his network size increases. We can infer that the trust cutoffs increase with a decrease in the power-law exponent α, i.e., α ∝ 1/(trust cutoff). However, we are not sure whether conspiracies or deceptions are caused in one's social network if he doesn't increase his levels of trust in individuals in his social network once the size of his social network increases. Thus, we proposed it as a conjecture to the readers for validation. Moreover, we hope to find topological connections [74,75] of this work with social networks in the future.
8. Open questions
In this section, we propose the following questions that are open in the common research domain of social networks, Dunbar's number, and trust.
Q.1. What are the topological and graph-theoretical features of one's social network, in terms of centrality, connectivity, etc., if trust levels are increased among the members of the network?
Q.2. Are there hidden mathematical and social network patterns in wars that were/are caused by deception?
Author contribution statement
Santanu Acharjee: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Akhil Thomas Panicker: Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Data availability statement
No data was used for the research described in the article.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
This research was supported in part by the International Centre for Theoretical Sciences (ICTS) during the participation of both authors in the program Summer Research Program on Dynamics of Complex Systems (Code: ICTS/Prog-DCS2019/07). The authors are thankful to Prof. Robin I. M. Dunbar of the University of Oxford for his constructive discussions during the preparation of this paper.
Footnotes
Supplementary data to this article can be found online at https://doi.org/10.1016/j.heliyon.2023.e19850.
Contributor Information
Santanu Acharjee, Email: sacharjee326@gmail.com.
Akhil Thomas Panicker, Email: atpanicker95@gmail.com.
References
1. Barnes J.A. Graph theory and social networks: a technical comment on connectedness and connectivity. Sociology. 1969;3(2):215–232.
2. Erdös P., Rényi A. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci. 1960;5:17–61.
3. Barabási A.L., Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–512. doi: 10.1126/science.286.5439.509.
4. Barnes J.A., Harary F. Graph theory in network analysis. Soc. Netw. 1983;5(2):235–244.
5. Hage P., Harary F. Eccentricity and centrality in networks. Soc. Netw. 1995;17(1):57–63.
6. Hsiao P.N. A social network model based on topology vision. In: Zhou J., editor. Complex Sciences. Complex 2009. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 5. Springer; Berlin, Heidelberg: 2009.
7. Salem S., Banitaan S., Aljarah I., Brewer J.E., Alroobi R. Discovering communities in social networks using topology and attributes. Proceedings of the 10th International Conference on Machine Learning and Applications. 2011;1:40–43.
8. Chung K.S.K., Piraveenan M., Hossain L. Topology of online social networks. In: Alhajj R., Rokne J., editors. Encyclopedia of Social Network Analysis and Mining. Springer; New York: 2014.
9. Juszczyszyn K., Kazienko P., Musiał K. Local topology of social network based on motif analysis. In: Lovrek I., Howlett R.J., Jain L.C., editors. Knowledge-Based Intelligent Information and Engineering Systems. KES 2008. Lecture Notes in Computer Science, vol. 5178. Springer; Berlin, Heidelberg: 2008.
10. Ferrara E., Fiumara G. Topological features of online social networks. Commun. Appl. Ind. Math. 2012:20. doi: 10.1685/journal.caim.381.
11. Waumans M.C., Nicodème T., Bersini H. Topology analysis of social networks extracted from literature. PLoS One. 2015;10(6). doi: 10.1371/journal.pone.0126470.
12. Rhodes C.J., Keefe E.M.J. Social network topology: a Bayesian approach. Jour. Oper. Res. Soc. 2007;58(12):1605–1611.
13. Singh L., Zhan J. Measuring topological anonymity in social networks. Proceedings of the IEEE International Conference on Granular Computing. 2007:770–774.
14. Acharjee S., Bora B., Dunbar R.I.M. On M-polynomials of Dunbar graphs in social networks. Symmetry. 2020;12(6):932.
15. Memon N., Alhajj R. Social networks: a powerful model for serving a wide range of domains. In: Memon N., Alhajj R., editors. From Sociology to Computing in Social Networks. Springer; Vienna: 2010.
16. Brown S.L., Brown R.M. Selective investment theory: recasting the functional significance of close relationships. Psy. Inq. 2006;17:1–29.
17. Sutcliffe A.G., Wang D., Dunbar R.I.M. Modelling the role of trust in social relationships. ACM Trans. Int. Tech. 2015;15(4):16:1–16:24.
18. Mitchell P. Introduction to Theory of Mind: Children, Autism and Apes. Arnold; London: 1997.
19. Rotter J.B. Generalized expectancies for interpersonal trust. Amer. Psy. 1971;26(5):443–452.
20. Vrij A. Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice. John Wiley and Sons; New York: 2001.
21. Hays R.B. A longitudinal study of friendship development. Jour. Person. Soc. Psy. 1985;48(4):909–924. doi: 10.1037//0022-3514.48.4.909.
22. Granovetter M. The strength of weak ties. Am. J. Sociol. 1973;78:1360–1380.
23. Baumeister R.F., Leary M.R. The need to belong: desire for interpersonal attachments as a fundamental human motivation. Psy. Bull. 1995;117(3):497–529.
24. McCabe K.A. A cognitive theory of reciprocal exchange. In: Ostrom E., Walker J., editors. Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research. Russel Sage Foundation; New York: pp. 147–169.
25. Hays R.B. The day-to-day functioning of close versus casual relationships. Jour. Soc. Per. Relation. 1989;6:21–37.
26. Hays R.B. The development and maintenance of friendship. J. Soc. Pers. Relat. 1984;1(1):75–98.
27. Sutcliffe A., Dunbar R., Binder J., Arrow H. Relationships and the social brain: integrating psychological and evolutionary perspectives. Br. J. Psychol. 2012;103:149–168. doi: 10.1111/j.2044-8295.2011.02061.x.
28. Roberts S.G.B., Renwick J.S. The development of cooperative relationships: an experiment. Proc. R. Soc. Lond. B. 2003;270:2279–2283.
29. Nowak M.A., Sigmund K. Evolution of indirect reciprocity. Nature. 2005;437:1291–1298. doi: 10.1038/nature04131.
30. Falcone R., Castelfranchi C. Social trust: a cognitive approach. In: Castelfranchi C., Tan Y., editors. Trust and Deception in Virtual Societies. Kluwer Academic Publishers; Boston: 2001. pp. 55–90.
31. Falcone R., Castelfranchi C. The socio-cognitive dynamics of trust: does trust create trust? In: Falcone R., et al., editors. Trust in Cybersocieties: Integrating the Human and Artificial Perspectives. Springer; Berlin: 2001. pp. 55–72.
32. Castelfranchi C., Falcone R. Trust Theory: A Socio-Cognitive and Computational Model. John Wiley; Chichester: 2010.
33. Nowak M.A., Sigmund K. Evolution of indirect reciprocity by image scoring. Nature. 1998;393:573–577. doi: 10.1038/31225.
34. Sutcliffe A., Wang D. Computational modelling of trust and social relationships. J. Artif. Soc. Soc. Simulat. 2012;15:3.
35. Sabatini F., Sarracino F. Online social networks and trust. Soc. Indicat. Res. 2019;142:229–260.
36. Gneezy U. Deception: the role of consequences. Amer. Econ. Rev. 2005;95(1):384–394.
37. Akerlof G.A. The market for “Lemons”: quality uncertainty and the market mechanism. Q. J. Econ. 1970;84(3):488–500.
38. Holmstrom B. Moral hazard and observability. Bell J. Econ. 1979;10(1):74–91.
39. Bentham J. An Introduction to the Principles of Morals and Legislation. Clarendon Press; Oxford: 1789.
40. Bok S. Lying: Moral Choices in Public and Private Life. Vintage Books; New York: 1978.
41. Iñiguez G., Govezensky T., Dunbar R., Kaski K., Barrio R.A. Effects of deception in social networks. Proc. R. Soc. B. 2014;281. doi: 10.1098/rspb.2014.1195.
42. Serota K.B., Levine T.R., Boster F.J. The prevalence of lying in America: three studies of self-reported lies. Hum. Commun. Res. 2010;36:2–25.
43. DePaulo B.M., Kashy D.A., Kirkendol S.E., Wyer M.M., Epstein J.A. Lying in everyday life. Jour. Pers. Soc. Psy. 1996;70:979–995.
44. Hancock J.T., Thom-Santelli J., Ritchie T. Deception and design: the impact of communication technology on lying behavior. CHI Lett. 2004;6:130–136.
45. Abeler J., Becker A., Falk A. Truth-telling: a representative assignment. Inst. Study Labor. 2012;6919:1–18.
46. Jensen L.A., Arnett J.J., Feldman S.S., Cauffman E. The right to do wrong: lying to parents among adolescents and emerging adults. Jour. Youth Adolesc. 2004;33(2):101–112.
47. Gino F., Pierce L. Dishonesty in the name of equity. Psy. Sci. 2009;20(9):634–644. doi: 10.1111/j.1467-9280.2009.02421.x.
48. Arnaboldi V., Conti M., Passarella A., Dunbar R.I.M. Online social networks and information diffusion: the role of ego. Online Soc. Netw. Media. 2017;1:44–55.
49. Wang C.S., Galinsky A.D., Murnighan J.K. Bad drives psychological reactions, but good propels behavior: responses to honesty and deception. Psy. Sci. 2009;20(5):634–644. doi: 10.1111/j.1467-9280.2009.02344.x.
50. Panasiti M.S., Pavone E.F., Merla A., Aglioti S.M. Situational and dispositional determinants of intentional deceiving. PLoS One. 2011;6(4). doi: 10.1371/journal.pone.0019465.
51. Gino F., Ayal S., Ariely D. Self-serving altruism? The lure of unethical actions that benefit others. Jour. Eco. Beh. Org. 2013;93:285–292. doi: 10.1016/j.jebo.2013.04.005.
52. Milgram S. The small-world problem. Psy. Today. 1967;1:61–67.
53. Travers J., Milgram S. An experimental study of the small-world problem. Sociometry. 1969;32:425–443.
54. Dunbar R.I.M. Co-evolution of neocortex size, group size and language in humans. Behav. Brain Sci. 1993;16:681–734.
55. MacCarron P., Kaski K., Dunbar R. Calling Dunbar's number. Soc. Netw. 2016;47:151–155.
56. Dunbar R.I.M. The anatomy of friendship. Trends Cognit. Sci. 2018;22:32–50. doi: 10.1016/j.tics.2017.10.004.
57. Hill R.A., Dunbar R.I.M. Social network size in humans. Hum. Nat. 2003;24:53–72. doi: 10.1007/s12110-003-1016-y.
58. Zhou W.X., Sornette D., Hill R.A., Dunbar R.I.M. Discrete hierarchical organization of social group sizes. Proc. R. Soc. B. 2005;272:439–444. doi: 10.1098/rspb.2004.2970.
59. Stiller J., Dunbar R.I.M. Perspective-taking and memory capacity predict social network size. Soc. Network. 2007;29:93–104.
60. Dunbar R.I.M. Constraints on the evolution of social institutions and their implications for information flow. Jour. Inst. Econ. 2011;7(3):345–371.
61. Dunbar R.I.M. Structure and function in human and primate social networks: implications for diffusion, network stability and health. Proc. R. Soc. A. 2020;476. doi: 10.1098/rspa.2020.0446.
62. Redner S. How popular is your paper? An empirical study of the citation distribution. The Euro. Phys. Jour. B. 1998;4:131–134.
63. Ferrara E. Community structure discovery in Facebook. Int. J. Soc. Netw. Min. 2012;1(1):67–90.
64. Ferrara E. A large-scale community structure analysis in Facebook. EPJ Data Science. 2012;1(1):1–30.
65. Kraut R.E., Fiore A.T. The role of founders in building online groups. CSCW '14: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. 2014:722–732.
66. Ma X., Cheng J., Iyer S., Naaman M. When do people trust their social groups? CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019:1–12.
67. Cohen R., Havlin S. Scale-free networks are ultrasmall. Phys. Rev. Lett. 2003;90(5):058701. doi: 10.1103/PhysRevLett.90.058701.
68. Guardiola X., Guimera R., Arenas A., Diaz-Guilera A., Streib D., Amaral L.A.N. Macro- and micro-structure of trust networks. https://arxiv.org/abs/cond-mat/0206240
69. Muchnik L., Pei S., Parra L.C., Reis S.D.S., Andrade J.S., Jr., Havlin S., Makse H.A. Origins of power-law degree distribution in the heterogeneity of human activity in social networks. Sci. Rep. 2013;3:1783. doi: 10.1038/srep01783.
70. Gonçalves B., Perra N., Vespignani A. Modeling users' activity on Twitter networks: validation of Dunbar's number. PLoS One. 2011;6(8). doi: 10.1371/journal.pone.0022656.
71. Lu Y., Zhang P., Cao Y., Hu Y., Guo L. On the frequency distribution of retweets. Procedia Computer Science. 2014;31:747–753.
72. Aparicio S., Villazón-Terrazas J., Álvarez G. A model for scale-free networks: application to Twitter. Entropy. 2015;17(8):5848–5867.
73. Hentschel M., Alonso O., Counts S., Kandylas V. Finding users we trust: scaling up verified Twitter users using their communication patterns. Proceedings of the International AAAI Conference on Web and Social Media. 2014;8(1):591–594.
74. Berry J.W., Phillips C.A., Saia J. Making social networks more human: a topological approach. Stat. Anal. Data Min. 2019;12(6):449–464.
75. Mossel E., Sly A., Tamuz O. Strategic learning and the topology of social networks. Econometrica. 2015;83(5):1755–1794.