I like the idea of a mechanic to “cash out”. I proposed a resignation mechanic in the past, which I feel wasn’t well received. I understand the concern that players who are not committed to the game might resign at inappropriate times, since that mechanic relies on the players to handle resignation carefully.
But this new idea would solve the concerns I had handily, perhaps without such a drawback.
@Shinuito I don’t think it’s so simple. With these rules I still think staying on board as long as possible is important. The more people drop out before you, the more potential for expanding your area.
At first glance, this scenario makes a lot of sense to me, so let’s try to find some flaws in it. The first I can think of is this:
Imagine that all but two players have already dropped out. They each have alive groups, but there is also some empty space left on the board to contend for. Will they play out the endgame “normally” and then drop out?
Imagine for a while that they are perfect collaborators (they have a binding agreement outside of the game that they will share their profit evenly). They could then game the system like yebellz described for scenario B: one of them almost fills up the board, then fills their own eye, drops out, allowing the other one to capture.
The reason that they prefer this “100-100” split to a “50-50” split (approximate numbers, in practice might be more like 98-100 instead of 53-47), is that their respective portions of the prize pool have now grown compared to the other players.
But if they aren’t perfect collaborators (which should really always be the case inside the confines of the game; using any external agreement seems like cheating), this trick might not be so easy to use. If A lets B fill up most of the board, why should B then fill their own eye? Allowing A a larger piece of the pie makes B’s smaller.
So what if it’s possible for B to first reduce their own group to one eye, before gradually filling the board without making any more eyes, so that A is guaranteed to be able to capture in the end. This already sounds very unlikely, but assume it could be done: then why should A actually let B carry on with claiming the board? B is killable from the beginning, and by capturing B earlier than “planned”, A makes their own piece of the pie bigger.
So, this seems to me to be a basic prisoner’s dilemma: both A and B would be better off working together than just finishing the endgame normally. But working together gives each of them a chance to betray the other for selfish benefit. Knowing that each of them should betray the other (once in that position, there is no rational reason not to), they are unable to cooperate, and forced to settle for finishing the endgame normally.
So… not a game-breaking flaw yet, I think. Can anyone come up with a scenario where players could collaborate to artificially increase their score, which isn’t vulnerable to betrayal?
Edit: Even though it might not be game-breaking, this is clearly a flaw which would be nice to avoid. Having a rule which cannot be exploited even by perfect collaborators would be preferable.
The remaining player claims as much area as they can before dropping out. The player that dropped out would (under the right choice of incentives…) already have made all their alive groups pass-alive. This situation seems relatively straightforward to me, but maybe I’m missing something?
In any case, due to some of the above troubles, I’m leaning towards using normal scoring in combination with the drop-out mechanic. There is always the option to drop out, but if it’s possible for you to stay in the game to the end, that should always be preferable. For instance, let’s say you get $1 per point if you drop out. Then at the end of the game, every player that has not dropped out or been eliminated gets an extra $81 on top of their own $1 per point (or maybe $10 per point for those players would be more reasonable; I’m just trying to get the relative payouts of the different scenarios right here).
But this is rapidly losing the elegance of the original idea
Also, all those extra sets of $81 make the game non-zero-sum again, so it would be vulnerable to the players letting each other stay in the game and share the profits.
I knew when I originally started thinking about the incentives that there probably was no perfect solution, but I wasn’t expecting it to be this difficult to find even a somewhat acceptable solution…
Fix a prize pool of $2 * (number of players) * (number of intersections) at the start of the game. When you drop out, you get $1 for each point you have on the board. Additionally, after everyone has dropped out, the remaining prize pool is split among the players with stones on the board, with each player getting the proportion (their current area score) / (the sum of all players current area scores).
That “$2” in the prize pool formula is to ensure that more than half the prize pool remains at the end. It could be made bigger to further increase the importance of the final position. The bigger the value, the rarer situations like this become:
You could actually make it so that a single owned intersection at the end of the game is always worth more than dropout. I’ve been hesitant about payouts which seem too extreme in the difference between highest and lowest, but since this is all theoretical anyways it might not be a problem actually.
To get a feel for what this could look like, assume that (in a 9x9 game with 5 players) we start with a prize pool of $10 000 (a lower prize pool could still guarantee that the end position is more important than anything else; I just chose this since it’s big enough and round).
During the game, two players drop out with 13 and 25 points on the board respectively, walking away with $13 and $25. All their stones are later captured (leaving them on would mean fewer points for someone else).
The other players have at the end of the game divided all the 81 points of the board like so: 35, 34, 12. When they drop out, they get $35, $34 and $12. The remaining prize pool of $9 881 is then split in proportion to those final scores, giving them roughly $4 270, $4 148 and $1 464 on top.
(notice that a single point difference is worth more than $100, which is more than the largest possible dropout score)
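To make these numbers concrete, here is a small sketch of how payouts under this scheme could be computed. The function name and structure are my own illustration; only the rule ($1 per point on dropping out, then the rest of the pool split in proportion to final area) comes from the proposal above.

```python
def payouts(prize_pool, dropout_scores, final_scores):
    """Sketch of the proposed scheme: every player gets $1 per point
    when they drop out, then whatever is left of the prize pool is
    split among the final players in proportion to their area."""
    remaining = prize_pool - sum(dropout_scores) - sum(final_scores)
    total = sum(final_scores)
    # Each final player: their $1-per-point dropout payment plus a
    # proportional share of the remaining pool.
    return [s + remaining * s / total for s in final_scores]

# 9x9 board, 5 players, $10 000 pool; two dropouts held 13 and 25
# points, and the final scores were 35, 34 and 12.
final_payouts = payouts(10_000, [13, 25], [35, 34, 12])
```

With these numbers the three remaining players end up with roughly $4 305, $4 182 and $1 476, and a single point in the final position is worth $9 881 / 81 ≈ $122, which is indeed more than the largest possible dropout score of $81.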
To me this all looks quite reasonable. One possible issue is that a player may find it meaningless to worry about the difference between $12 and $13 in a context where they could have won thousands of dollars instead. The idea here is that every player should always try to maximize their winnings, regardless of whether their next move gains $1 or $1000. If everyone follows this basic principle, I believe this would work well (or at least I don’t immediately see a problem with it).
The prize pool can of course be kept smaller to make the payouts more even, if you don’t mind the occasional strategic dropout. It may turn out that even
$1 * (number of players) * (number of intersections)
is enough in practice, since players dropping out will rarely fill up more than half the board. I just wanted to show above that with a big enough prize pool, gaining a single point in the end position is always worth more than a strategic dropout. With a more modest prize pool this would still hold most of the time, but not always.
I like this idea a lot, but I feel dividing the prize pool at the end of the game may still leave some players with no reason to play: they can be sure they won’t reach the end of the game, but they also have no reason to help others, since that won’t increase their own prize.
How about a payout system, but the amount is multiplied by (the number of eliminated players + 1)?
Do you mean that when you drop out, your payout is multiplied by (1 + the number of players who are no longer in the game)?
If I’m understanding correctly, this is to motivate staying in the game longer than the other players, even when that doesn’t directly affect your score. How the edge case where players drop out on the same turn is handled then becomes quite important.
If both A and B want to drop out, but each would prefer to wait until at least the move after the other has, they will both wait until the last possible moment. This seemed a bit weird to me at first, but it might actually be a good thing. It would incentivize them to help speed up the capture of the other.
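As I understand the proposal, the payout rule could be sketched like this (a minimal illustration; the function name is mine, and it sidesteps the same-turn edge case by assuming a strict drop-out order):

```python
def dropout_payout(points, players_already_out):
    """Dropout payout under the multiplier idea: $1 per point on the
    board, multiplied by (number of players already out + 1)."""
    return points * (players_already_out + 1)

# Dropping out with 20 points as the third player to leave (two
# players already out) would pay 20 * 3 = $60, versus $20 for
# leaving first.
```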
This is my attempt to combine the good parts of previous ideas:
Start with a prize-pool of at least $ (number of players) * (number of intersections).
Keep the same elimination mechanic as in the first game, and add the option to drop out at any time. A drop-out is submitted instead of a move.
After all the moves of a round have been made and resolved, if a player has just been eliminated or has dropped out in that round, every player gets $1 for each point they currently have on the board. This includes players who dropped out earlier but still have stones on the board. (If multiple players drop out or are eliminated in the same round, the payout happens multiple times.)
When every player has dropped out and all payouts have been made, the remaining prize pool is divided according to the final board position. Note that a player could have dropped out earlier in the game, and still get paid for an alive group in the end.
One consequence of this ruleset is that you can’t wait too long to drop out, since you don’t want some of your stones to be captured on the same turn. This is emphasized by the fact that another player may want to capture you before you drop out, to make that space their territory in the payout after you drop out (and even if it won’t be a territory, making your payout smaller leaves more money left for the players at the end of the game).
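A rough sketch of how these payout rules could be settled, assuming a simple event log of the rounds in which someone left the game (the names and structure here are my own illustration, not part of the proposal):

```python
def settle(prize_pool, payout_rounds, final_scores):
    """payout_rounds: for each round in which a player dropped out or
    was eliminated, a dict of every player's current area score
    (including earlier dropouts whose stones are still on the board).
    final_scores: area scores in the final position."""
    paid = {player: 0.0 for player in final_scores}
    for scores in payout_rounds:
        for player, points in scores.items():
            paid.setdefault(player, 0.0)
            paid[player] += points        # $1 per point, per payout event
            prize_pool -= points
    total = sum(final_scores.values())
    for player, points in final_scores.items():
        # Whatever is left of the pool is divided according to the
        # final board position.
        paid[player] += prize_pool * points / total
    return paid
```

Note that the whole pool is always paid out: the $1-per-point events drain it during the game, and the final split distributes the rest.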
I think a future game should try one of these fine-grained scoring proposals, where scoring a strong second that takes more of the pot can actually be preferable to finishing first with a smaller relative share.
I do prefer the alternative concept of winner takes all scoring, but I think the best way to support my perspective is for players to try it both ways and see for themselves what they prefer.
Ultimately, providing a fine-grained payout structure does not necessarily give full control over the actual preferences of the players, especially when the money being distributed is entirely hypothetical. Since it is essentially just an abstract score, the players may care much more about the prestige of saying they came in first than about maximizing their payout. After achieving an unsurpassable first-place position, it might not be so interesting to keep working hard to increase one’s score further. For the players in weaker positions, too, there is only the weaker goal of marginally increasing one’s payout, which is just fictional money with no real intrinsic meaning, when others are so far ahead. I can’t see the trailing players necessarily caring much about making the moves to increase their score from $15 to $16 when several other players have scores likely in the hundreds.

In fact, I think many players might still care more about hurting the outcome of certain other players (such as those that betrayed them and left them in such a weak position) than about working toward the best score (or even improving from, say, 6th to 5th place). For such players, it might be more meaningful to say that they got even with the person that ruined their position by ruining their nemesis as well. If Bob’s betrayal was the reason I dropped to, say, 5th place with a payout likely reduced to the tens, I would care more about ruining Bob and dropping his payout by a lot, even if that strategy would likely reduce my own ultimate payout.
In addition, no payout structure (with fictional money) really stops players from dropping out at suboptimal times, perhaps due to getting bored or losing interest after becoming unable to win. The only way to get players to take the game seriously and play their best through the whole game, without providing actual meaningful external rewards, is to initially recruit a group of dedicated and serious players.
Further, the “strong second” effect inherently introduces pathological changes to the strategy. This view comes from playing a lot of regular Diplomacy, where a score maximization goal (capturing the most supply centers even if not the leader) leads to a game that greatly marginalizes the players that get off to a weak start. Often, the two or three players with the strongest early position can run away with the game, while the rest have very little leverage to break or shift such an alliance. On the other hand, with a winner takes all objective, the ability to threaten to throw the game is a very significant tool that can be wielded by more players.
Finally, fine-grained payouts are not necessary for players to assign fine-grained significance to the details of how the game plays out, or preferences about how other players finish. For example, players often develop preferences about whether another player wins or loses, even when they have personally lost hope of a positive result for themselves.
Here’s another form of playing the game that changes the goal and promotes collaboration more:
There is a prize pot (say $1000) which is distributed among the members of the winning faction at the end of the game, proportional to the area each member owns.
All members of the winning faction need to agree on the members of the winning faction.
The area shared by the members of the winning faction covers at least half of the board.
Black has 27 points, white 23 points, red 9 points, yellow 22 points.
To win, the winning faction needs at least 41 points. It is in the winners’ interest to keep the total as close to 41 as possible, since each member’s share of the prize is then larger. Although black has the most points, on this board white and yellow would agree to be the winning faction, since they have a total of 45 points. White would get $511, and yellow would get $489.
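Using the numbers from this example, the split could be computed like so (a sketch; the function name and the rounding to whole dollars are my own assumptions):

```python
import math

def faction_shares(prize_pot, faction_points, total_points=81):
    """Split the pot among the agreed winning faction, proportional
    to the area each member owns. The faction must cover at least
    half the board to win."""
    claimed = sum(faction_points.values())
    if claimed < math.ceil(total_points / 2):
        raise ValueError("faction does not cover at least half the board")
    return {player: round(prize_pot * points / claimed)
            for player, points in faction_points.items()}

# White (23) and yellow (22) form a 45-point faction on the 81-point
# board, so white gets $511 and yellow $489.
shares = faction_shares(1000, {"white": 23, "yellow": 22})
```

This also shows why the winners want the faction total close to 41: the smaller `claimed` is, the larger each member’s proportional share of the same pot.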
(Most of the below post is probably pretty obvious to anyone who has played diplomatic games before or has any knowledge of game theory. I don’t belong to either category so I wrote it for myself and for others like me )
I made this comment in the middle of the game:
This was a bit naive of me, as has become clear to me while playing and discussing the game, so let me amend my position.
Here’s how I might have reasoned earlier (not a real quote, just approximating my thought process):
Players without any clear goals who are still able to affect the game are, in my opinion, a bad thing. This is because I believe having ways to predict and reason about the opponents is very important, both for making the game enjoyable and for strategic depth. Thus the game rules should specify a utility function that is as fine-grained as possible. Ideally, every player should try to “play rationally” to the best of their ability, under the specified utility function.
What wasn’t clear to me then was that, even if it were possible to force everyone to “play rationally”, this wouldn’t necessarily be a good thing. Consider this example:
We play go with 3 players, but the players take turns submitting moves. The goal of each player is to maximize their own score on the board. We further dictate that you should play under the assumption that both your opponents are only trying to minimize your score, without caring about their own.
(If it seems weird that each player is playing under some assumptions that look false, just think of it as each player optimizing for their best possible worst case scenario, given no information about the others.)
(Also ignore for now how impossible it would be to play 1v2 normally. Assume we are already in an endgame with some immortal groups on the board or something. The details don’t matter for our thought experiment.)
The result is a game where we can actually define “perfect play” just like we do in normal go, since we’ve basically reduced it to a perfect information 2-player game for each individual player, “me vs everyone else”.
That may sound pretty good, especially if you like being able to “read” properly like in normal go and so on. The bad news is that we’ve completely removed the diplomacy from the game.
For a less extreme example, consider the game we just played. Would it be a good thing if we somehow could require each player to always play for a win if possible, or else a draw if possible? No, not really. Strong diplomatic play may be to make a threat to play “irrationally” later on, to force cooperation from another player. Note that you may not actually have to make any “irrational” plays, only make a credible threat that you will. But if the rules forbid you from following through on such a threat, then it can’t be credible and won’t work.
I think these examples demonstrate that in some way there is an inverse relationship between the two desirable properties of being able to “read” (or more generally predict the other players), and being able to do diplomacy.
So does this mean that a more fine-grained utility function is bad for the diplomacy? No, it just means that we have to completely embrace the fact that the given utility function is merely a suggestion, and we can never fault a player for playing “irrationally”. Still, the utility function given in the rules plays an important role. It tells you what to play for, in the absence of other considerations. It gives you a goal from the absolute beginning of the game (before you have made any friends or enemies), and with a fine-grained utility function you will almost always have something to play for all the way to the end.
If you want to ignore the given utility function, you can. But it’s there when you want it.
All this to say that I’m still very much of the belief that a more fine-grained utility function is better for this game. It should be clearly specified in the rules, but it should also be clear that it is different from the objective rules of the game, since you can play however you like.
I believe one can say a bit more about what constitutes “good” play, and when it’s appropriate for a player to seemingly go against the given utility function, in the context of repeated games. For my last proposal a few posts above, where a prize pool of $ (# players) * (# intersections) is distributed, I like to think of this as each player paying $ (# intersections) to participate in the game. Then we can imagine a long string of such games, and the goal is to make money in the long run.
But I don’t think we have to write “imagine this hypothetical scenario, and play accordingly” in the rules. That might be helpful for training an AI to play strongly, but I think most humans will quite naturally converge towards the same opinion of what constitutes strong play, just based on our psychology and the human context of participating in a board game.
So, should we start discussing the details of the next game? There’s no rush in starting the game, but I expect it might take us a while to agree on the rules
I would be happy to host it, and I’m thinking we might want to try a slightly larger board, probably 13x13. I think that board size could work well with 4-8 players, depending on how many are interested in playing.
A bigger board will lead to a longer game (even with the same number of players per unit of area, since some big groups will probably be captured), so hopefully we can find enough players prepared for that. Personally I think the more interesting bigger board is worth it, but we could also stick to 9x9 (or maybe consider 11x11).
I would suggest that we either play completely without the “extra days” or maybe limit it to one extra day per player, to keep the game going at a reasonable pace. The first game seems to have gone about 50% longer due to extensions, and it seems like they mostly happened when players forgot about the game.
But all of the above are just my opinions, I hope everybody interested in participating will share their thoughts
I would love to play another game. After reviewing the discussion about the rules, I’m unsure whether I still agree with my own past statements and suggestions. I don’t think the status quo needs to change anymore, but I’m down to try one of the new suggestions too.
However, I will say that private discussion with other players can potentially be quite time-consuming, more so if there are even more players. If the number of players exceeds eight, I expect the game to be very chaotic, and I’d have to limit my private chat with each opponent to about 1-2 messages / day. But a chaotic game also sounds fun, so I’m not necessarily against it
I still think there should be a benefit for players to drop out / play on when lost. I’d like to think in terms of what benefit a move has, and it’s not possible to find benefit for a move when there is no chance to win. Ideally at each moment a player will have a goal that’s not artificial and it should always be possible for them to play in a way to increase their chance to win.
Asking people to drop out to “cash in” their prize only works if there is an actual prize. Getting a virtual prize yet still ending in last place is less fun than playing on and trying to influence the game, even if it costs you your virtual prize. So, although I like the prize pool suggestions as hypothetical improvements, and I’m sure they would work well in an actual betting game, I’m not sure they will change much about these informal forum games.
For this reason, I’d really like it if people could form proper alliances, in the sense that members of an alliance win together and need each other to win. Especially if we have a game with, say, 8 people, this will become quite interesting.
Shall we play the next game with my suggested goal? Or does somebody have an alternative comparable way to incorporate a concrete goal for weaker players?
I’m also ready to host a game, if @yebellz wishes to try playing
I don’t see how to practically limit private discussion to just a few messages a day without greatly distorting the interaction.
However, there are other forms of diplomacy:
Full press – which is what we just did, with both a public channel and pairwise private channels
Public press – only the public discussion channel is available
No press – no discussion at all is allowed, i.e., interaction between the players is limited to the board plays, submitted moves, and voting (if those are public)
The latter form is also called “gunboat” and is sometimes derided as “diplomacy minus the diplomacy”; however, it can still be an interesting game in itself.
I would be willing to either play or host again. I like the idea of trying a larger board and I think we should try one of the amended objectives, but let’s discuss that further.
Oh I wasn’t suggesting to change the rules concerning communication, just that I personally would try to keep the number of messages at a reasonable amount.