You should wash your stones. Maybe it’s only the contrast with the forum’s dark mode, but the black stones look dirty on my display.
Yeah, that previous example might have overdone it a bit.
How about this comparison?
Ultimately, my point isn’t to say use this specific off-white/off-black palette, but rather just the general point that the colors could be played with a bit.
I like the board and lines of the left board, and the color of the stones of the right board…
I’ve changed the colour of the stones and lines, and decreased the spacing between the stones (although not as much as you suggested). Is that better?
The next thing I’m looking at is computer-generating/solving the problems. I’ll update this thread when I’m done, although that may be a while because this is a reasonably involved problem…
So I created a solver/generator, and posted about it here: I created a life and death solver and generator
Here’s a post to revive the original discussion 2 years later…
So, I’m currently in the process of adding some high quality puzzles to OGS from a book written by a pro. The problem I have is accurately judging the difficulty. Therefore, in line with the original suggestion and subsequent agreements, I think it would be better to have:
- (a) coarser categories (e.g. elementary, easy, medium, hard, very hard) to give an initial rough guide to the difficulty; these can probably be set fairly reliably by the puzzle creator; and/or
- (b) automatic ranking of puzzles based on successful/unsuccessful solving of the puzzle and the user’s rank at the time of attempting the puzzle.
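To make option (b) concrete, here’s a minimal sketch of what automatic ranking from attempt data might look like. Everything here is hypothetical: the attempt records, the fixed `offset` nudge, and the category cut-offs are all illustrative guesses, not anything OGS actually implements.

```python
from statistics import mean

def estimate_difficulty(attempts, offset=2.0):
    """Estimate a puzzle's rank from (solver_rank, solved) pairs.

    Ranks are in kyu-style units (higher number = weaker player).
    A success at rank r hints the puzzle sits at or below r's level
    (nudge towards weaker, r + offset); a failure hints it sits above
    (nudge towards stronger, r - offset). The offset is an arbitrary choice.
    """
    if not attempts:
        return None
    return mean(r + offset if solved else r - offset for r, solved in attempts)

def category(est):
    """Map the numeric estimate onto the coarse labels from suggestion (a).

    The thresholds are made up for illustration.
    """
    if est is None:
        return "unrated"
    if est >= 20:
        return "elementary"
    if est >= 10:
        return "easy"
    if est >= 1:
        return "medium"
    if est >= -3:
        return "hard"
    return "very hard"
```

For example, one solve by a 12 kyu and one failure by an 8 kyu would average out to an estimate of 10 kyu, displayed simply as “easy”. The appeal is that the puzzle creator never has to pick from ~40 ranks; the data converges on a category over time.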
I agree with the other posters that this is an important change for the puzzle section, because the current system of puzzle creators having to assign the difficulty of a puzzle from a list of ~40 ranks is clearly resulting in many puzzles with misleading difficulty settings.
I wonder, 2 years after the original post, if there is any progress or plans on this idea?
I think it would be better to have … coarser categories
I agree. I’d suggest something like:
Or we could reduce the granularity even further, to just
- High dan / pro
- Low dan
you ask the players who tackle a go puzzle to score the puzzle on a scale that goes from very hard / hard / neutral / easy to very easy
you combine this with their rank
in time (if enough players of diverse ranks participate), I suspect that you will get a good impression of the difficulty of a go puzzle.
Great that you do this. Love it.
Not sure if this will work, because if a good player sees an easy puzzle and decides to skip it, this will influence the result in an undesirable way.
Yes, I like your idea of players essentially voting on the difficulty of a puzzle and combining this with their rank to determine the displayed difficulty. After all, just because someone gets it right, doesn’t mean it was necessarily easy - it might have taken them a lot of hard work to get the right answer.
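The vote-plus-rank idea above could be sketched like this, assuming a vote is interpreted as an offset from the voter’s own rank (“very easy” means well below my level, “very hard” well above it). The offset values are illustrative guesses, and the function names are made up.

```python
# Illustrative mapping from vote label to an offset (in kyu-style units,
# higher number = weaker) from the voter's own rank.
VOTE_OFFSET = {
    "very easy": 6,
    "easy": 3,
    "neutral": 0,
    "hard": -3,
    "very hard": -6,
}

def difficulty_from_votes(votes):
    """Estimate a puzzle's rank from (voter_rank, vote_label) pairs.

    Each vote is read relative to the voter: a 12 kyu voting "easy"
    suggests roughly 15 kyu difficulty; a 6 kyu voting "very hard"
    suggests roughly 1 dan territory (0 in this kyu-style scale).
    """
    adjusted = [rank + VOTE_OFFSET[label] for rank, label in votes]
    return sum(adjusted) / len(adjusted) if adjusted else None
```

This also captures your point that a correct answer isn’t evidence of easiness: a player who solved it after a long struggle can still vote “hard”, and that vote carries information their solve result alone wouldn’t.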
I do think the categories should be elementary, easy, medium, hard and then one of very hard, advanced or pro, as these are in line with existing tsumego conventions, e.g. Cho Chikun’s Encyclopedia of Life and Death.
Maybe time spent on solving the puzzle could be taken into account?
Agree with you.
I believe chesstempo would be a good model to emulate.
Had to google it. But still not sure what you mean.
As far as I know they’ve already figured out how to do this well for chess tactics, so their solutions would be desirable.
I think a categorisation which just compares the puzzles relatively to each other like this
is better than trying to tie the puzzles to a particular rank/rating
There are many aspects to the game of go other than just tsumego, and they’re all bundled up in a person’s rank. I think it’s usually too confusing to label go problems by rank, since there is no one universal rank, and also not everyone does tsumego. It kind of has the implied “if you’re this rank you should be able to solve all the problems ranked lower than this”, but that probably won’t be the case, and so the puzzles’ difficulty just won’t line up with ranks. I imagine you just end up with comments like this
I think you might mention though, that most normal go players can and should subtract a healthy percentage from their rank to find the right problem books. I.e. as an 8k, I can barely solve the 12 to 14k Tsumego. I have seen 5Dan players struggle with the 3K problems…
Even if it’s granulated further to
what really makes a puzzle a dan level? Is it the number of variations to try out, the number of moves ahead you have to read, the number of distinct tesuji involved etc?
The only thing that really makes sense for a puzzle collection is whether one puzzle is ‘harder’ than another. I think even if people find a puzzle easy (possibly because they’ve done the same or similar before), they probably can judge whether it would require more work to figure out the answer from scratch as compared to another puzzle.
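A purely relative ordering like this could be built from nothing but pairwise “A felt harder than B” judgments, with no rank attached to anything. A minimal sketch (the function name and tallying scheme are my own invention; a real system would want something more robust, e.g. a Bradley–Terry-style model):

```python
from collections import defaultdict

def relative_order(comparisons):
    """Order puzzles from pairwise judgments, easiest first.

    comparisons: iterable of (harder_id, easier_id) pairs. Each judgment
    nudges the "harder" puzzle up and the "easier" puzzle down; the final
    sort by net score gives a rank-free relative ordering.
    """
    score = defaultdict(int)
    for harder, easier in comparisons:
        score[harder] += 1
        score[easier] -= 1
    return sorted(score, key=lambda puzzle_id: score[puzzle_id])
```

The output is exactly the kind of categorisation argued for above: it says “this puzzle is harder than that one” without ever claiming “this puzzle is 5 kyu”.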
I think this makes sense, whenever you have the data available.
Tbh, I forgot what this thread was about, and I did like some things, so I probably had read it before. I’ll write two posts though instead.
So if we’re back to the original idea of trying to clean up the OGS puzzle section, then sure, it would be nice to be able to give a (separate?) ‘average rating’ to each puzzle, which depends on the level of the players who’ve successfully solved or failed it. It would in part take away the random judgement that “a 12 kyu should be able to solve this”, and replace it with “typically 12 kyus have solved this”.
I think when you allow people to vote there are a lot of factors to consider, and you need a good system in place. You probably need something simple, like just an up/down/no-vote button which people could use to nudge the apparent difficulty of a problem. If they get a free vote or a free choice of rating, there’s too much potential for outliers and trolling.
I think this only makes sense when you test the solution properly. For instance, the OGS puzzles randomly select a response from the tree, from what I remember. If you only had one line, for example the strongest response to the players move, then if the player knows that line, they’ll solve it quickly.
How do you know, though, that they’ve really thought through all of the possibilities and responses, to have really solved the problem? In theory you could be comparing data from people who click, solve and move on, against people who sit down and consider all the possibilities first and then answer.
the OGS puzzles randomly select a response from the tree
We should ideally have a feature which allows the puzzle creator to specify the strongest resistance and have that played whenever possible.
That probably would be a nice feature. Maybe it could be a toggle depending on whether you just wanted to explore the puzzle and see what kind of responses were in the tree as opposed to just wanting to straight solve it against the toughest response.
It might also be good to decide again on the focus of the discussion.
Is the proposal
- to change the current OGS puzzle section, where users can still make their own puzzle collections.
- to implement a new puzzle feature, say like the Joseki Explorer. This one maybe would be more similar to sites like lichess.
I think I have different feelings depending on the options above. Like for instance, maybe you can just go ahead with the idea of assigning a rank to the puzzle based on the players who solve it. I still think what counts as solving it is debatable. I mean, you could have something like key lines for each puzzle, if there’s more than one tricky response.
One thing to consider is that Lichess has a lot of puzzles, I think because they used a bot to farm them out of users’ games. OGS puzzles are all created by humans and we don’t have that many. So what works well on a large scale may have unforeseen effects on a smaller one.
Why are people discussing having the puzzle author assign difficulty? Give the puzzles a Glicko rating, give the players a Glicko rating, and let everything sort itself out.
In terms of what good tsumego software should look like: 101weiqi, but in English.
Currently the puzzle author does assign difficulty.
Does that make sense? If I’m reading it right, it sounds like each user gets an Elo rating, and whenever they try a puzzle they ‘match’ against the puzzle: the ‘winner’ gains Elo while the ‘loser’ loses Elo. Why does it make sense to treat puzzles as though they were players playing a game against a user?
I just imagine a large scale deflation of the puzzles’ Elo ratings, depending on the setup. Or stranger results. Imagine I’m a 12 kyu and I’m doing a set of 12 kyu puzzles (supposedly of equal difficulty). If I get a bunch of the first few questions right, those go down in rating and I go up. Then the rest of the 12 kyu puzzles don’t go down as much when I beat them, because my rating is higher.
At least this would be what I imagine in the current puzzles setup. The first few problems of a set probably get played more than the rest.
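The ordering effect described above can be played out with a minimal textbook Elo update. This is just a sketch of the standard formula, not anything OGS uses; K=32 and the 1500 starting rating are arbitrary conventional choices.

```python
def expected(rating_a, rating_b):
    """Standard Elo expected score for a player rated rating_a vs rating_b."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(player, puzzle, solved, k=32):
    """Return (new_player, new_puzzle) after one attempt.

    The puzzle is treated as the opponent: the player's gain is the
    puzzle's loss, exactly as if they had played a game.
    """
    delta = k * ((1.0 if solved else 0.0) - expected(player, puzzle))
    return player + delta, puzzle - delta

# A 1500-rated player works through five identically rated puzzles
# in order and solves them all.
player = 1500.0
puzzles = [1500.0] * 5
for i in range(5):
    player, puzzles[i] = update(player, puzzles[i], solved=True)
```

After the loop, the player’s rating has risen with each solve, so each later puzzle loses fewer points than the one before it: the first puzzle in the set ends up rated lowest purely because of attempt order, which is the bias in question.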
Would you suggest a new Puzzles feature similar to the joseki explorer, or would you suggest randomising the order of puzzles in a set that the user uploads, to compensate for the possible bias of users new to the set trying the initial puzzles a lot?