Trusted Communities

Trusted communities: the only way one can effectively combat cheating

The problem with multiplayer

Multiplayer games have the problem that, for performance reasons, some of the code and data (in many cases, MOST of the code and data) has to run on the client. The client is an untrusted environment. No amount of protection is ever going to be completely secure against tampering on the client by an experienced person. Open source multiplayer games have the additional problem that a lot of information about the inner workings of the client is made available, which makes subtle modifications that give an advantage terribly easy. For some games like Cube, the problem is even bigger because almost all of the gameplay is clientside, and thus “godmode” is only a single line of code away. So protecting the client appears hopeless.

But there is another approach: make players identify themselves, so they can be held accountable when they cheat. The big problem here is that there is no such thing as “identity” on the internet; any system of names/accounts allows cheaters to simply reappear under a new name when caught.

Commercial games have found a solution that works to some extent: supply a unique key with every sold copy of the game, which allows identification. It doesn’t really create “identity”, but at least it makes identity cost $50, which is the cost of a new game/key when you are caught cheating. This is not a great solution however, since admins have to be very sure someone is cheating before they can “rob” them of their game, and for some cheaters the $50 may still be worth the risk. And most importantly, this approach doesn’t work for open source games.

So the issue boils down to: how does one establish an “identity” on the net which is not easily forgeable/renewable, and which is feasible organisationally? Enter: trusted communities.

A community-controlled “identity”

The idea is simple and already exists in the form of social communities like Orkut: have the usual community site where people can have accounts with nicknames etc, but with one exception: you can only join if you are invited by someone already part of the community. Or to be more specific: every community member specifies who else he trusts in this community. Your number of “friends” cannot be less than 1.

Besides “trust”, another relationship members may have is “not trust”, which essentially means you think it is highly likely the person in question is a cheater. If you are not sure whether someone is a cheater, simply not having them in your “trust” list is a good start. Every person gets assigned a score, which is computed by some magical formula taking into account your “trust” and “not trust” relations (a sketch of one possible formula follows below). Servers may now require a minimal score for you to be able to enter the server, so cheaters are automatically denied access to most servers as people start distrusting them.

The crucial element is that “identity” now has a cost (much like the $50 above): to get to play on the servers with trusted people, you need to get to know people in the community who will give you trust. This is a fair bit of work, and definitely increasingly hard to do if you repeatedly ruin your reputation. Players would be careful about giving trust, because giving trust to random people only has disadvantages (being associated with cheaters/low-trust people reflects badly on you as well).

The work of proving identity is spread out over the community, and no centralized administration for registering users and finding cheaters is needed. This is what makes the idea feasible in the first place for open source games.

A system like this would not likely survive without some form of human intervention, however. The easiest way would be for the game makers to occasionally give the users with the highest scores “moderator” rights. Moderators would manually correct “trust” issues. A common issue one can predict in the basic system above is that as soon as people get “not trust”-ed, counter “not trust”-ing will ensue for no other reason than self defense. Moderators can in such cases remove unjustified “not trust” relations, and/or add a third kind of relation, the “false accusation”, which is also taken into account for scores (and a good deterrent, so people think twice before they accuse someone out of rage). The idea here is that moderators help stop things escalating into “not trust” wars between clans and such. A moderator can also easily recognise when a group of cheaters tries to beat the score formula by having a ring of friends all trust each other.

People whom no one in the community knows yet can still play: they will just have a default score and thus have a limited selection of servers available. Enough to try the game, but the urge to “upgrade” will exist.
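
To make the scoring concrete, here is a minimal sketch of one possible formula, in Python. The function names, the raw counts it takes, and all the weights are assumptions for illustration; they are not part of the design above.

    # Hypothetical score formula based on simple per-player relation counts
    # kept by the community site / masterserver:
    #   trusted_by             : members listing this player in their "trust" list
    #   distrusted_by          : members who marked this player "not trust"
    #   false_accusations_made : "not trust" marks made BY this player that
    #                            moderators later flagged as false accusations
    def compute_score(trusted_by: int,
                      distrusted_by: int,
                      false_accusations_made: int,
                      base: float = 1.0) -> float:
        score = base
        score += 1.0 * trusted_by              # trust relations raise the score
        score -= 2.0 * distrusted_by           # distrust weighs heavier
        score -= 3.0 * false_accusations_made  # deterrent against revenge accusations
        return score

    # A server requiring a minimal score of, say, 5 would simply deny any
    # player whose masterserver-reported score is below that threshold.
    def allowed_on(min_score: float, player_score: float) -> bool:
        return player_score >= min_score

A real formula would probably also weight each trust relation by the truster’s own score (so that a ring of cheaters trusting each other gains little), which turns the computation into something iterative along the lines of PageRank; the flat counts above are just the simplest possible starting point.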

Implementation

Implementing a website with the above community functionality is 95% of the work. On the game side, things are really simple: the player sets, in his game configuration, a key which he receives when registering with the community. The game sends the key to the server, which in turn sends it to the masterserver, which returns the current score for the player. If the score is too low for the server setting, the player is denied access. A sketch of this check follows below.

“not trust” commands could be built into the game client, but this is not necessary, and maybe not even desirable.

In this scheme, servers have to be trusted, because otherwise the system would be vulnerable to key-mining. The need for trusted servers could be removed by encrypting the key from client to masterserver, or by having the client communicate the key to the master directly at the same time.
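
As a minimal sketch, the server-side part of this check might look like the Python below. The masterserver URL, the query format, and the function names are all assumptions for illustration, not an actual protocol.

    import urllib.parse
    import urllib.request

    MASTERSERVER = "http://master.example.org/score"   # placeholder URL
    MIN_SCORE = 5                                       # this server's threshold

    def lookup_score(community_key: str) -> float:
        # Ask the masterserver for the current score attached to this key.
        url = MASTERSERVER + "?" + urllib.parse.urlencode({"key": community_key})
        with urllib.request.urlopen(url) as response:
            return float(response.read().decode())

    def on_player_connect(community_key: str) -> bool:
        # Allow the player in only if his community score meets the server minimum.
        return lookup_score(community_key) >= MIN_SCORE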

Conclusion

This system makes combating cheating cheaper (on average, per person) than cheating, and is decentralized. This is the crucial reason why it can be successful.

Decentralized trust

The above system is nice, but it also requires a lot of centralized infrastructure and administration. Here is an alternative system for when you want a trust system that is fully administered client-side:

The main idea is that each player will generate an ID to identify each other player he ever encounters in-game, and will use this to assign trust levels. These trust levels of all players already in a game can then be used, depending on server settings, to allow new players into a server.

How does it work? Each client has a simple flat database of all players he has ever encountered, stored and managed locally only. This DB (which can be a simple text file) contains, per known player, the fields below; a minimal sketch of such a record follows the list.

  • The player's nickname. These are obviously not unique, nor can they be reserved/protected, but that is ok in this system.
  • A tag. Can be set to further distinguishing info in case there are 2 or more identical nicknames in the DB.
  • Their_ID. An ID you generate to represent them. An ID can just be a 32-bit or 64-bit random number generated clientside. Clashes are extremely unlikely since identity is defined by name+ID, not just ID.
  • My_ID. An ID generated by them to represent you.
  • Trust. A measure of trust you assign to this person.
  • Stats. Any number of statistics gathered about your games with this player. These serve to further help identify people, but also are just plain interesting in the long run. For example:
    • Frags: number of times you killed this player
    • Deaths: number of times this player has killed you
    • Time: number of minutes spent in games together
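
As a minimal sketch, such a record could look like this in Python; the field names follow the list above, and the one-line text format at the end is just an assumption about how the flat file might be laid out.

    import random
    from dataclasses import dataclass

    @dataclass
    class KnownPlayer:
        nickname: str         # nicknames are not unique or protected; that is ok here
        tag: str = ""         # extra info to tell apart identical nicknames
        their_id: int = 0     # random ID *you* generated to represent them
        my_id: int = 0        # the ID *they* generated to represent you
        trust: int = 0        # trust level you assign (suggested levels below)
        frags: int = 0        # times you killed this player
        deaths: int = 0       # times this player killed you
        minutes: int = 0      # time spent in games together

    def new_id() -> int:
        # A fresh 64-bit random ID; clashes are irrelevant since identity
        # is defined by name+ID, not by ID alone.
        return random.getrandbits(64)

    # One line per known player in a plain text file, e.g.:
    # Fred||0x52a1c3e907b14f66|0x1184be70aa23cc01|1|23|31|140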

Additionally, each client has a single global ID, which is just used for name-collision checks in the initial handshake.

So when you encounter a new player on the server, your game client generates a new ID for him, and sends it to him. He will do the same for you. Say his name is Fred. You play together, and he appears to be a nice, trustworthy player. You’d trust him next time you see him.

So next time you are on a server, a player named Fred joins. He already appears to have the ID you generated for him, and sends it to you. Your game client notes that you already know this player, and prints his stats. You are pretty sure it’s the Fred from last time, and you have another good game together. You assign him a higher trust level using the in-game menus, which gets saved to the local DB.

Now you are again on the server, and “Fred” joins. This time, however, he does not have the ID you gave him, so even though his name is the same, he shows up as an unknown player in the DB, and the game client tags him with a number so as to differentiate him from any other Freds. He shows up on your screen as Fred(2), with no stats. It is now up to you whether to play with this guy or not. He may simply turn out to be a different guy, or a cheater. In either case, you can change his tag of “2” to something more telling, to help you differentiate him from your friend Fred.
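
Building on the KnownPlayer record sketched earlier, the client-side lookup could work roughly as below; the helper name and the exact matching rule are assumptions, but the key point is that identity is name+ID, not name alone.

    def resolve_player(db: list, nickname: str, claimed_id):
        # Return the known record if the nickname AND the ID we once generated
        # for him both match; otherwise create a fresh entry with a new ID and
        # a tag ("2", "3", ...) to tell him apart from other players of that name.
        for p in db:
            if p.nickname == nickname and claimed_id is not None and p.their_id == claimed_id:
                return p                        # the Fred we know; show his stats

        same_name = sum(1 for p in db if p.nickname == nickname)
        entry = KnownPlayer(nickname=nickname,
                            tag=str(same_name + 1) if same_name else "",
                            their_id=new_id())
        db.append(entry)
        return entry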

So this system allows you to establish identity in a way that is not susceptible to nickname clashes and nickname impersonation. It encourages people to stay with one nickname, which is useful, and prevents the plague of online games that is “aliasing”.

But we can do more. The server can have a setting that determines whether new people are allowed to join, based on the combined trust levels of all players already on the server. This allows anything from a very conservative, friends-only, cheater-wary setup to a completely open, newbie-welcoming one.

First, you can assign trust levels to people that you frequently play with, to make it even easier to play with them. Suggested trust levels:

  • 0: the default. In most cases it’s not needed to assign trust, as the IDs + stats already make it clear enough whether to let someone in.
  • 1: played with this person quite a few times and appears to be a normal player, probably not a cheater
  • 2: know this person well, certainly not a cheater.
  • -1: saw some behaviour from him that’s unlikely to be legitimate, but I am not 100% sure he is cheating
  • -2: 100% sure a cheater.

Assigning negative scores may not always have an effect, since cheaters will want to assume a new identity every time. But just in case a cheater is dumb enough to repeatedly want to cheat in your games under the same identity, this can help. Remember that this system is for creating identity of people that you know, not for everyone. It is in each player’s interest to try and build up an identity in the community and therefore be trusted.

Now, using trust levels, we can do things like add up the trust scores that all players already on the server assign to a player trying to connect, and set server rules based on that. Additionally, we can factor the total stats into the scores, i.e. a player that has already played 10 combined hours with players on the server is more likely trustworthy than someone with 0 hours. When a player tries to connect, it may show something like “Fred is trying to connect, total trust score 5 (you: 1), hrs played 10 (you 3)”. Example server rules may be (a sketch of such a check follows the list):

  • let anyone in
  • let anyone in with no negative trust
  • let someone in who is trusted by at least one player
  • let someone in with a total score of at least N
  • let someone in voted “yes” by 1 person
  • let someone in with majority vote
  • only let players in on majority vote
  • only let players in trusted by everyone
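
As a sketch, a rule like “let someone in with a total score of at least N” could be evaluated as below. The per-player db_lookup helper, the weight given to hours played, and the rule names are assumptions for illustration, and only a few of the rules above are shown.

    def admission_score(connecting_id, players_on_server):
        # Sum the trust every current player assigns to the connecting player,
        # plus a small bonus for hours already played together.
        total_trust = 0
        total_hours = 0.0
        for p in players_on_server:
            entry = p.db_lookup(connecting_id)   # that player's local record, if any
            if entry is not None:
                total_trust += entry.trust
                total_hours += entry.minutes / 60.0
        return total_trust + 0.1 * total_hours   # hours weigh in only lightly

    def allow_join(connecting_id, players_on_server, rule="min_score", n=1):
        score = admission_score(connecting_id, players_on_server)
        if rule == "anyone":
            return True
        if rule == "no_negative":
            return score >= 0
        if rule == "min_score":
            return score >= n
        return False                             # unknown rule: be conservative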

ID exchanges happen when a new player is connected to a server, but before he is allowed to send or receive any normal game packets. This ensures that no matter how badly a client is hacked, if the players don’t let him in, he can do nothing (bar maybe DoS) to disrupt the game.

Now in an Open Source game where the server is also Open Source, there is the issue that you can’t really trust the server either. One thing a malicious server could attempt to do is ID harvesting, since it’s the only party that gets to see all IDs. This will in most cases not be effective however, since you need ALL of a player’s IDs to successfully impersonate him. If you only have some (because he plays on other servers too), the impersonator may get onto a server where some people trust him and others don’t, which may uncover him. If harvesting did become an issue, p2p ID exchange could be used, but I don’t think it’s necessary.

As said before, this system helps establish identity and trust, but it has the same problem as all systems of its kind: it can’t easily differentiate newbies from cheaters. This is a problem that cannot be solved. Existing players will have to strike a balance between playing “safe” games and helping newbies join the community. Newbies will have a slightly harder time getting started if they are not allowed in certain games, but this is only temporary and will only put off the most casual of players. It is not ideal, but as a very simple social solution to cheating, it comes close enough.