But now this topic is much more pressing. People accuse each other of cheating in serious tournaments and cite figures like "75% of moves are the top-3 LZ choice". But no one really knows how to reliably determine whether someone is cheating or not.
How about we get a team of trustworthy players to play each other and secretly tell some of them to win by cheating? Then we see if anyone can figure out which ones are the cheaters.
How about creating an AI that analyses games and decides how likely it is that the players are using AI? Neural networks are amazing at classifying things, and this seems an easier problem than playing go itself. Also, I would enjoy the irony of it all.
Sounds like a fun test
Could even work just by uploading .sgfs like with the guess rank game…
I am not sure how good computers would be at classifying "human" moves. Not sure what you would base it on. Personally, I would be very interested in having some trustworthy batch of high-ranking games (from tournaments, for example) and measuring the average and highest correlation of moves between them and several AIs (say, the top 3 move choices). Unfortunately, I suspect the numbers might be quite high. Too lazy to do it myself though.
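The correlation measurement described above can be sketched in a few lines. This is a minimal illustration, not a real analysis pipeline: it assumes you have already run an engine over each position and extracted its top-3 picks (the coordinates below are made up, not actual engine output).

```python
# Minimal sketch: given a game's moves and a (hypothetical) engine's
# top-3 picks for each position, compute the top-3 match rate.

def top3_match_rate(moves, engine_top3):
    """Fraction of played moves that appear in the engine's top 3
    for the corresponding position."""
    hits = sum(1 for move, top3 in zip(moves, engine_top3) if move in top3)
    return hits / len(moves)

# Toy example with invented SGF-style coordinates:
moves = ["dd", "qd", "dq", "qq", "cf"]
engine_top3 = [
    ["dd", "qd", "dp"],   # move 1: played "dd" -> hit
    ["qd", "dq", "oc"],   # move 2: hit
    ["cq", "pp", "dp"],   # move 3: miss
    ["qq", "pq", "qp"],   # move 4: hit
    ["cf", "fc", "cn"],   # move 5: hit
]
print(top3_match_rate(moves, engine_top3))  # 4 of 5 moves match -> 0.8
```

Running this over a trusted batch of tournament games would give the baseline distribution the post asks about.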
This is also an important question for chess players, and they have had to deal with it for much longer. As far as I know, they have some methods at their disposal. I know of some websites where people using chess engines to cheat are (sometimes) detected. Maybe the moderators run multiple chess engines and check whether one of them has a high correlation with the moves played. Although I doubt that it works without a human looking at the data. I don't know if those methods could be applied to go, but it's an idea.
Well exactly, that part is easy. The hard part is figuring out what correlation is "high enough" to say this person is cheating. (We are kind of getting off topic here S_Alexander, let me know if you would rather split the thread.)
Well, you don't have to know yourself what to base it on; as long as you have valid training data, the algorithm will learn what to look for. Similar to how we don't know which tactics AlphaZero should use, and yet it's super strong.
There's plenty of training data as well, with a whole library full of records. If you use games from before 2015 and from OTB tournaments, you can be sure no strong AI was involved.
Granted, I really do know close to nothing about neural networks, but I still don't believe it would be that "easy". If we train it on pre-2015 data, we will naturally get an onslaught of false positives, no? That's the problem: people have started mimicking bot play, but that in itself is not a "crime"… And I am afraid the window of deviation might just be too small to draw really reliable conclusions.
That's not to say I wouldn't welcome an attempt at such a program; I'd be happy to test it if anyone wants to take a stab at it.