Another Reason Not to Practice Versus AI

Beginners often ask how and whether to play against AI opponents, and experienced players might say, “a little is okay, but don’t overdo it,” with varying degrees of caution. The reasons given are usually about AI style. AIs play differently from humans, so there is an important limit on how well practice games against AI will prepare you for play against humans. But it goes deeper than style or quality of moves alone. There’s something fundamentally missing from the whole human experience of playing the game.

Real intelligence as it occurs in humans and animals is very different from artificial intelligence. It may as well be an alien mind. And not one of those cute anthropomorphic aliens with two legs and two arms. When we encounter an AI opponent on the goban, we can’t engage with it the way we would engage with another naturally intelligent agent. We can anticipate its tendencies, like playing an early 3-3 invasion (which we humans have since re-incorporated into our collective knowledge base and digested as our own understanding), but we can’t form a “theory of mind” of an AI.

“Theory of mind” is the way we model another being’s feelings, knowledge, and intentions within our own minds. This is a part of playing go! Playing a game is not just abstract reasoning about the board state on each turn. We engage with our human opponent in long-term strategizing by trying to anticipate what they’ll do and why. This naturally happens “under the hood” in a deep way. Subconsciously modeling another person’s mind gives us a kind of recursive, parallel processing network, and we can lean on that when we learn and play.

[citations needed]

7 Likes

I disagree wholeheartedly. Go is a perfect-information game: feelings and intentions are not part of playing it, but rather human flaws in how people play it. The better a player is at Go, the less feeling should matter.

Go is purely abstract reasoning about the board state for each turn.


Even then, I disagree with the statement that it’s impossible to form a “theory of mind” of AI. I’m convinced that as AI becomes more and more advanced, we will be able to “model the subconsciousness” of the AI, and assign it character traits or feelings.

8 Likes

I’ll grant that there is a school of thought that says chess and go should be approached as board states only. “Play the board, not the person,” or some such, right? I think this is not really true for chess, and even less true for go.

It’s not a flaw but a strength that we have the capacity to play with feeling. In trying to describe the reasoning behind a move, teachers often refer to the feeling of the move when the reasoning becomes ineffable.

I’ll go so far as to say that idealizing abstract, position-only play is dehumanizing. Maybe we’d even agree on that?

Someday, not too far away, we’ll probably have AI that mimics human-like play, complete with artificial stupidity, and we’ll be able to imagine what it’s “thinking” without falling into the Uncanny Valley, but that experience would be a lie.

2 Likes

I think we agree on this, but you see it as a bad thing, and I see it as a good thing :wink:


For elementary learning purposes it is perhaps better to humanise the meaning of the best moves on the board, or the intention of an opponent. But I think this should only be done for beginners, and it should quickly be established that this is a superficial communication layer on top of an inherently abstract game that does not care about intention.

The best example of this is probably trick moves: you know they exist, and they work on humans that don’t know them, but they’re essentially bad moves since they can be countered.

4 Likes

I agree with your basic premise that playing humans is better than playing AI, but I am (and, correct me if I’m wrong, but it seems you too are) having trouble articulating exactly why.

I think the commonest argument, which you mentioned in your first post, is that playing vs. AI teaches bad habits, which was very true pre-AG but is becoming increasingly less true.

I usually play the last few (maybe ~6, tops) moves through quickly before beginning to think about a correspondence game move. I used to assume that this was an error, but I no longer believe so.

I think relevant here is a more recent belief I have come to hold as a result of my research into historical and modern mnemonic techniques: humans are incredibly adept at two kinds of memory, stories and locations. I think the manner in which one can memorize a go game is circumstantial evidence of the first: very few, if any, memorize a go game by the coordinates of each and every individual move, with no relation to prior and potential moves considered; rather, they memorize the first ~4 moves and then remember every move thereafter as a function of the reasoning, real or constructed, behind it: another word, line, or verse in the ballad of the game: a story.

I think, therefore, that looking at go as purely a sequence of solving for a coordinate given a board state, over and over again with no continuity assumed, is, while theoretically correct, a fundamentally inhuman, and therefore in practice wrong, way to view it. It is not wrong in fact, but in result: being contrary in perspective to human strengths, it will result in worse outcomes for humans, despite being more correct in fact.

3 Likes

I agree.

When you play near your opponent’s weak group to get some profit or a better position out of it, you can feel his fear and know that he will respond without even considering other parts of the board that may be more profitable or urgent.

The AI can’t be reasoned with, it can’t be bargained with. It doesn’t feel pity or remorse or fear.

3 Likes

The title specifically says practice. So why not the best of both worlds? Review and practise with AI, but also enjoy mind games vs human opponents.
Ideally the opponent’s mental state should not matter. But in practice, and in reality, there are often many equally strong options, and picking whichever annoys the opponent more wins you more games. Just don’t fall for wishful thinking.

1 Like

I’m not arguing this.

Humans operate best when we model our opponent as another person because interpersonal interaction is a natural function of humans. However, this does not necessarily imply exploitative play, to borrow a poker term, nor is such in any respect necessary in go outside handicap or reverse komi games.

1 Like

TPKs need sufficient practice with elementary tactics (for example, knowing not to crawl on the first line) before they can develop a meaningful strategy and anticipate opponent moves. The counter-argument is that AI opponents are not a good way for TPKs to practise elementary tactics, compared to elementary tsumego or playing other TPKs.

1 Like

Let’s call it psychology in go. I think it applies to many more aspects of the game: time control, style of play, etc.
It’s not a topic that attracts go players much. There are no books and very few words on it in the four corners of the internet…
At least it seems hard to deny its existence.

But why not take this side into account, instead of trying to be as impersonal as software could be?

1 Like

The problem, IMO, with playing an AI is that it has no human flaws. These flaws are real and are important to account for when thinking about the game. It has to be said that humans look at things much less objectively than computers. There is nothing wrong with logic-based thinking, but the problem is that when you face a person who is not playing the board but playing with emotion, they can make rash choices and moves that a computer would not. They take more risks and can be more aggressive and unpredictable. So while you should play AIs, don’t only play AIs.

1 Like

… I am (and, correct me if I’m wrong, but it seems you too are) having trouble articulating exactly why.

My point was about “theory of mind” when engaging another human, rather than a general explanation of why.

The game-as-“story” analogy is fine for meaning and memory for us, but I think there’s more to it than that.

1 Like

I agree that Theory of Mind (one’s ability to accurately predict what another party knows) is helpful for playing go, by the reasoning I propounded earlier; are you considering engaging with another Mind/Being to be a goal in and of itself? I do think that’s another part of the reason I don’t want to play vs. AI.