One way to make a problem collection is to collect human games and use AI to identify blunder moves, as e.g. AI-Sensei already does (though I want more learning material than I can get from just my own blunders). The resulting problems could then be graded based on user performance.
Has this been done? I am particularly interested in the early game, where it is often impossible to evaluate moves confidently without AI.
Such a problem database might include point-evaluation data. A nice example of such a problem collection for Backgammon can be found at https://www.bgtrain.com/
One might also want to include (AI-generated) possible continuations for the top few moves selected by the AI. The collection should not be expected to fully explain why moves are good or bad, or to guarantee that moves not on the list are bad.
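To make the blunder-detection idea concrete, here is a minimal sketch. It assumes we already have, for each move of a game, the engine's score estimate (in points, from the mover's perspective) for its best move and for the move actually played; the function name, data layout, and 3-point threshold are all my own illustrative choices, not an existing tool's API.

```python
# Hypothetical sketch: flag blunder candidates from per-move score estimates.
# Each entry is (score of engine's best move, score of the move played),
# both in points from the mover's perspective.

def find_blunders(move_scores, threshold=3.0):
    """Return the indices of moves that lose at least `threshold`
    points compared to the engine's preferred move."""
    blunders = []
    for i, (best_score, played_score) in enumerate(move_scores):
        if best_score - played_score >= threshold:
            blunders.append(i)
    return blunders

# Toy game: move 2 drops 5.2 points, move 4 drops 3.1 points.
game = [(0.5, 0.3), (1.0, 0.8), (2.0, -3.2), (1.5, 1.4), (4.0, 0.9)]
print(find_blunders(game))  # -> [2, 4]
```

The flagged positions would then still need human or statistical curation (e.g. grading by user success rate, as suggested above) before they become puzzles.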
I doubt this would be easy to build, especially categorizing and ordering the blunders (so that the same kind of mistake can be found across different games).
It may work in the opening somehow (I am not fully sure), but after that, with the whole-board situation to consider and stones rarely in similar places, it seems like too much to expect to me.
A blunder alone does not make a good puzzle. Continuing locally when you should tenuki might be a blunder, yet it does not matter much where you tenuki to. But a blunder in a position with a single best move that is (say) 5 points better than the second-best move might give better candidates.
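The "single clear best move" filter described above can be sketched as follows. This assumes the engine reports score estimates for its top candidate moves in the position (as KataGo's analysis output can); the function name and the 5-point gap are illustrative assumptions.

```python
# Hypothetical sketch: keep only positions where the engine's top move
# is at least `gap` points better than the second-best candidate.

def has_unique_best_move(candidate_scores, gap=5.0):
    """candidate_scores: engine score estimates (in points) for the
    top candidate moves in one position. Returns True when one move
    stands clearly above all alternatives."""
    if len(candidate_scores) < 2:
        return False  # cannot judge uniqueness from a single candidate
    ranked = sorted(candidate_scores, reverse=True)
    return ranked[0] - ranked[1] >= gap

# A tenuki-style position: several near-equal moves -> poor puzzle.
print(has_unique_best_move([3.0, 2.8, 2.7]))   # -> False
# One move clearly best -> promising puzzle candidate.
print(has_unique_best_move([6.0, 0.5, -1.0]))  # -> True
```

Combining this with a blunder detector would select positions where the player both made a costly mistake and had a unique correct answer available.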
The early game is tricky. KataGo tends to show multiple options within 0.2 points of its preferred move, and even professionals are sometimes happy to play moves outside the AI's top three because they suit their style and the score drop is not significant. Most of the old books saying "in this fuseki, A is the only move" are simply wrong. See the thread "501 Opening Problems and a committee of AIs" on Life In 19x19 for some examples.