How users’ participation affects reputation management systems: The case of P2P networks

https://doi.org/10.1016/j.simpat.2010.05.010

Abstract

In this work the human factor is explored by means of agent-based simulation and analyzed in the framework of a reputation management system (RMS) within a peer-to-peer (P2P) network. Reputation is about evaluating an agent’s actions and other agents’ opinions about those actions, reporting on those actions and opinions, and reacting to that report, thus creating a feedback loop. This social mechanism has been successfully used to classify agents within normative systems. Such systems rely on the feedback given by the members of the social network in which the RMS operates. Reputation can thus be seen as an endogenous and self-produced indicator, created by the users for the users’ benefit. This implies that users’ participation and collaboration are a key factor for the effectiveness of a RMS.

Introduction

The purpose of this work is to show, by means of simulation, that in systems based on social networks and, more generally, on social links and recommendations, individuals’ effective participation in the process of assigning a “reputation value” to others is a key factor for the well-being of the system itself. This is particularly true when the system is a large one, in which few agents actually know each other and hence have to rely on others’ opinions about the unknown parties they deal with.

In everyday life, when a choice constrained by limited resources (such as money, time, and so on) must be made, the overwhelming number of possibilities to choose from means that people need something to help them make decisions. People often follow the advice of others when it comes to which products to buy, which movies to watch, which music to listen to, which websites to visit, and so on. This is a social attitude that draws on others’ experience. People usually base their decision about whether or not to follow the received advice partly on the other person’s reputation for helping them find reliable and useful information, even with all the noise.

Using and building upon early collaborative filtering techniques, reputation management software gathers ratings for people, companies, and information sources. Since this is a distributed way of computing reputation, it is implicitly founded on two main assumptions:

  1. The correctness of the shared information.

  2. The participation of users in the system.

While the violation of the first could be considered an attack on the system itself, performed by users trying to break it, and is quite rare, the second factor is often underestimated when designing a collaborative RMS. Users without a vision of the macro level often use the system but simply forget to collaborate, since collaborating seems to them a waste of time.

In this work a qualitative and, when possible, quantitative evaluation of the collaborative factor in RMSs is given, by means of an empirical analysis conducted via an agent-based simulation. Thus, the main research question is: how effective is a RMS when the collaboration rate of the involved users changes?

To answer this question, the paper introduces an agent-based model representing a peer-to-peer (P2P) network for file sharing. A basic RMS is applied to the system in order to help users choose the best peers to download from. In fact, some of the peers are malicious and try to exploit the way in which the P2P system rewards users for sharing files, by uploading inauthentic resources when they do not own the real ones. The model is described in detail and the results are evaluated through a multi-run ceteris paribus technique, in which only one setting is changed at a time. In particular, the most important parameters compared in order to evaluate the effectiveness of the RMS are the verification of files, performed by the users, and the negative payoff, given when a resource is reported as inauthentic. The verification of files, i.e. the users’ collaboration, is an exogenous factor that the RMS cannot control, while the negative payoff is an endogenous and thus directly controllable variable from the point of view of a RMS designer.
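A minimal Python sketch of the multi-run ceteris paribus set-up described above (not the authors’ code): every experiment changes exactly one parameter while keeping the others fixed, and averages an outcome index over repeated runs. The parameter names, baseline values and the placeholder run_simulation() function are illustrative assumptions.

```python
import random

BASELINE = {
    "verification_probability": 1.0,  # chance that a user checks a downloaded file
    "negative_payoff": -1.0,          # reputation penalty for an inauthentic upload
}

def run_simulation(params: dict, seed: int) -> float:
    """Placeholder for one run of the agent-based model; returns a well-being index."""
    random.seed(seed)
    # ... the actual P2P/RMS simulation would go here ...
    return random.random()

def ceteris_paribus(parameter: str, values, runs: int = 10) -> dict:
    """Vary one parameter at a time and average the index over `runs` repetitions."""
    results = {}
    for value in values:
        params = dict(BASELINE, **{parameter: value})
        results[value] = sum(run_simulation(params, s) for s in range(runs)) / runs
    return results

if __name__ == "__main__":
    print(ceteris_paribus("verification_probability", [0.25, 0.5, 0.75, 1.0]))
```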

The P2P framework has been chosen because many works focus on reputation as a mechanism to overcome the issue of inauthentic files; but, when evaluating the effectiveness of the system, the authors [8] usually refer to idealized situations in which users always verify files for authenticity as soon as they start a download. This is obviously not the case in the real world: first of all, most resources must be at least partially owned in order to be checked. Besides, some users could simply decide not to check them for a long time. Even worse, other users could simply forget about a downloaded resource and never check it. Last but not least, other users might verify it but simply not report anything if it is not authentic.

Even though this work uses P2P networks as a scenario, the authors’ goal is not to propose a new RMS or to improve an existing reputation management algorithm.

Moreover, this paper deals with “normal” situations, i.e. those happening in the majority of cases. Thus, deliberate attacks on a RMS are not considered, and users are assumed to give a correct opinion, if and when they decide to share it with others. That “if and when”, and how much it affects the effectiveness of a RMS, is indeed the main topic of this research.

Section snippets

Reputation and P2P systems

A reputation system is a type of collaborative filtering algorithm which attempts to determine ratings for a collection of entities, given a collection of opinions that those entities hold about each other.

This is similar to a recommendation system, but its purpose is for entities to recommend each other, rather than some external set of items (such as books, movies, or music). The role of reputation systems is to facilitate trust [13], and they often function by making the reputation more
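As a purely illustrative sketch of the idea above (not code from the paper), a reputation score can be obtained by aggregating the opinions that the entities themselves express about each other; a plain average is assumed here.

```python
# Ratings for a collection of entities derived from the opinions those entities
# hold about each other (simple average, self-opinions ignored).

from collections import defaultdict

def compute_reputations(opinions):
    """opinions: iterable of (rater, target, score) tuples with score in [0, 1]."""
    totals, counts = defaultdict(float), defaultdict(int)
    for rater, target, score in opinions:
        if rater == target:
            continue  # an entity's opinion about itself is not counted
        totals[target] += score
        counts[target] += 1
    return {target: totals[target] / counts[target] for target in totals}

# Example: three peers rating each other.
print(compute_reputations([("a", "b", 1.0), ("c", "b", 0.5), ("b", "a", 0.0)]))
```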

Possible limits of RMS

RMSs have been shown, via simulation, to make P2P networks safe from attacks by malicious peers, even when those peers form coalitions. In networks of millions of peers attacks are less frequent, but users still benefit from sharing inauthentic files. It is not clear whether RMSs can be effective against this widespread selfish misbehavior, since they make several idealized assumptions about the behavior of the peers who have to verify files in order to discover inauthentic ones. This operation is assumed to be automatic

Model framework

We assume a simple idealized model of reputation, since the objective is not to prove the effectiveness of a particular algorithm but to study the effect of users’ behavior on a reputation system. We use a centralized system which assumes the correctness of the information provided by users; for instance, it is not possible to evaluate a user with whom there was no interaction. When verifying a file, agents give a negative payoff to the uploading agent if the file is inauthentic. In
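A minimal sketch of the centralized bookkeeping just described, under the same idealized assumptions; the class and method names, the initial value and the default payoff are illustrative, not the paper’s implementation.

```python
# Centralized reputation table: feedback is accepted only from peers that actually
# interacted with the rated uploader, and a negative payoff is applied when a
# downloader verifies a file and finds it inauthentic.

class CentralReputationService:
    def __init__(self, initial: float = 0.0, negative_payoff: float = -1.0):
        self.initial = initial
        self.negative_payoff = negative_payoff
        self.reputation = {}       # peer id -> current reputation value
        self.interactions = set()  # (downloader, uploader) pairs seen so far

    def register(self, peer_id) -> None:
        self.reputation.setdefault(peer_id, self.initial)

    def record_download(self, downloader, uploader) -> None:
        self.interactions.add((downloader, uploader))

    def report_inauthentic(self, downloader, uploader) -> None:
        # The "correctness of information" assumption: no interaction, no evaluation.
        if (downloader, uploader) in self.interactions:
            self.reputation[uploader] += self.negative_payoff

svc = CentralReputationService()
svc.register("downloader"); svc.register("uploader")
svc.record_download("downloader", "uploader")
svc.report_inauthentic("downloader", "uploader")  # uploader's reputation drops to -1.0
```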

Model specifications and parameters

The P2P network is modeled as an undirected and non-reflexive graph. Each node is an agent, representing a P2P user. Agents are reactive: their behavior is determined a priori, and their strategies result from the stimuli coming from the environment and from condition-action rules. Their behavior is illustrated in the next section. Formally, the multi-agent system is defined as MAS = <Ag; Rel>, where Ag is the set of nodes and Rel is the set of edges. Each edge between two nodes is a link between the agents
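An illustrative encoding of MAS = <Ag; Rel> might look as follows; the agent attributes and the condition-action rules shown here are assumptions made only for the sake of the example.

```python
# Agents (Ag) as nodes and links (Rel) as undirected, non-reflexive edges stored as
# unordered pairs, so that (a, b) and (b, a) are the same edge and self-loops are forbidden.

from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: int
    reputation: float = 0.0
    owned_resources: set = field(default_factory=set)

    def act(self, stimulus: str) -> str:
        # Reactive behavior: a fixed condition-action rule, no deliberation.
        rules = {"resource_requested": "upload", "download_finished": "maybe_verify"}
        return rules.get(stimulus, "idle")

agents = {i: Agent(i) for i in range(5)}        # Ag: set of nodes
edges = {frozenset((0, 1)), frozenset((1, 2))}  # Rel: set of edges

def add_link(a: int, b: int) -> None:
    if a != b:                                  # non-reflexive: no self-loops
        edges.add(frozenset((a, b)))

add_link(2, 3)
print(agents[0].act("resource_requested"))      # -> "upload"
```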

Results

The simulation runs as long as at least one goal exists and/or a download is still in progress. Table 1 gives a summary of the most important parameters for the experiments. These are configurable by the user and can be changed independently.

In the described experiments, the parameters shown in Table 1 are fixed, while those listed in Table 2 change (according to the given experiment).

A crucial index, defining the well-being of the P2P system, is the ratio between the number of inauthentic

Whitewashing

A “whitewashing” mode is implemented and can be selected before the simulation starts, in order to simulate the real behavior of some P2P users who, realizing that they can no longer download (since they have low credits or, in this case, a bad reputation), disconnect their client and then reconnect, so as to start again from the initial pool of credits/reputation. When this mode is active, at the beginning of each turn all the agents whose reputation is below a given threshold have it reset to the initial value,
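A minimal sketch of the whitewashing reset, assuming a plain dictionary of reputation values; the threshold and the initial value are illustrative parameters.

```python
# At the beginning of each turn, every agent whose reputation fell below the threshold
# "reconnects" and is reset to the initial value, wiping out its bad record.

def apply_whitewashing(reputation: dict, threshold: float = -5.0, initial: float = 0.0) -> None:
    for peer, value in reputation.items():
        if value < threshold:
            reputation[peer] = initial

reps = {"p1": -7.0, "p2": 2.0}
apply_whitewashing(reps)
print(reps)  # {'p1': 0.0, 'p2': 2.0}
```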

Scaling issues

When the number of agents changes, agent-based models may suffer from scaling issues, meaning that the results obtained with a certain number of agents might not hold when this number changes significantly. To verify this, two more experiments were carried out. The number of agents is increased to 150 (three times the previous value). Consistently, the number of edges, the initial pool of resources and the number of resources introduced at each turn are also tripled. The trend is very similar:
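The scaling check amounts to multiplying every size-related parameter by the same factor, as in the sketch below; only the agent counts (50 and 150) follow from the text, while the remaining baseline numbers are placeholders.

```python
# Proportional rescaling of the configuration: tripling the number of agents together
# with the edges, the initial pool of resources and the resources introduced per turn,
# so that density and per-agent supply stay comparable across experiments.

def scale_config(config: dict, factor: int) -> dict:
    size_keys = {"agents", "edges", "initial_resources", "new_resources_per_turn"}
    return {k: (v * factor if k in size_keys else v) for k, v in config.items()}

baseline = {"agents": 50, "edges": 200, "initial_resources": 100,
            "new_resources_per_turn": 10, "verification_probability": 1.0}
print(scale_config(baseline, 3))  # 150 agents, with edges and resources tripled as well
```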

Conclusion and outlook

In this article the problem of the human factor and social participation has been applied to P2P systems, but it could easily be extended to other virtual societies and, to a certain extent, also to real (non-virtual) social networks, e.g., enterprise networks, where trust and individual reputation are central issues for the formation of new links and the maintenance of existing ones.

The main purpose of the work was in fact to show, by means of an empirical analysis based on simulation,

Acknowledgements

This work has been partially funded by the project ICT 4LAW, financed by Regione Piemonte. The authors gratefully acknowledge Mr. Gianluca Tornese for his help with the technical implementation of the model.

References (17)

  • A. Jøsang et al., A survey of trust and reputation systems for online service provision, Decision Support Systems (2007).
  • A. Abdul-Rahman, S. Hailes, Supporting trust in virtual communities, in: Proceedings of the Hawaii International...
  • S. Buchegger, J.Y. Le Boudec, A robust reputation system for P2P and mobile ad-hoc networks, in: Second Workshop on the...
  • V. Cahill, B. Shand, E. Gray, et al., Using trust for secure collaboration in uncertain environments, Pervasive...
  • M. Carbone, M. Nielsen, V. Sassone, A formal model for trust in dynamic networks, in: Proceedings of International...
  • R. Dingledine et al., Accountability measures for peer to peer systems.
  • M. Gupta, P. Judge, M. Ammar, A reputation system for peer-to-peer networks, in: Proceedings of the 13th International...
  • A. Jøsang, R. Ismail, The beta reputation system, in: Proceedings of the 15th Bled Electronic Commerce Conference, June...
There are more references available in the full text version of this article.

Cited by (10)

  • Modeling the effect of the badges gamification mechanism on personality traits of Stack Overflow users

    2020, Simulation Modelling Practice and Theory
    Citation Excerpt:

    Gamification mechanisms have been used in a plethora of sectors and applications to support the productivity and participation of humans. For example, [9] proposed an agent-based simulation of a reputation management system, within a peer-to-peer network, or [10] studied how gamification mechanisms enhance learning and education. Our work shares similarities with other theoretical and empirical studies which analyze the impact of gamification mechanisms, such as reputation, bounties and badges award processes on user behavior.

  • Online reputation management for improving marketing by using a hybrid MCDM model

    2012, Knowledge-Based Systems
    Citation Excerpt:

    This paper shows the specific process of carrying out ORM, the influential weights of criteria for ORM, and evaluating the performances in professional services of marketing under the full consideration of ORM by a hybrid multiple criteria decision making (MCDM) model. Previous studies regarding about ORM focused on considering solely what factors would influence online reputation [1,22,35] and reputation management [6,9,23]. However, decision making needs consideration of multiple criteria, which are interdependent and feedback in real world.

  • Modelling user participation in organisations as networks

    2012, Expert Systems with Applications
    Citation Excerpt:

    Consequently, UP research continues to be dominated by studies of systems development, a trend earlier observed by McKeen and Guimaraes (1997). Nonetheless current IS research are now exploring novel areas of study such as online communities (Jarvenpaa & Majchrzak, 2010; Remondino & Boella, 2010; Zhou, 2011), IS security risk management (Spears & Barki, 2010), and ICT use in social care (Webb, 2008). UP has also been studied in areas such as fisheries management (Nielsen & Vedsmand, 1999) and watershed management (Johnson, Ravnborg, Westermann, & Probst, 2001).

  • A multidimensional contribution-based service differentiation scheme for P2P Systems

    2017, Proceedings - 2017 4th International Conference on Information Science and Control Engineering, ICISCE 2017