ELF OpenGo weirdness

Hello friends. I analyzed some opening positions today with Lizzie 0.6 loaded with LeelaZero v0.17 and converted ELF OpenGo v2 weights. This is the strongest freely available analysis tool I’m aware of, but the output got pretty strange at high playouts, as shown below, with my pointer hovering over the top candidate, c17.

My questions:

  1. Is the micro enclosure at 6 any good? Other AIs don’t play it, and I’ve only ever seen it as a response to a large knight’s approach. It’s secure but very narrow and seems to contradict the general AI trend of speedy openings.
  2. Is the armpit hit at 4 any good? ELF likes it even if the order is flipped: 5-4-3. But it invites White to secure the corner territory and leaves Black with one fewer liberty. How can this be a good exchange?

All comments are welcome. I’m just trying to make sense of ELF’s move preferences.

6 Likes

If that were any good, we would certainly have seen it in pro games already.

Would you have more details to share on how to install this configuration?

Sure. Here are the source files:

On Windows, I downloaded each one, installed Lizzie, replaced the v0.16 LeelaZero files with v0.17, put the ELF weights in the Lizzie directory, and told config.txt to call the ELF weights instead of the default network.gz.
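For reference, the relevant part of my config.txt ends up looking roughly like this. Treat it as a sketch from memory rather than the canonical format: the key names are what I recall from Lizzie 0.6, and elf_v2_converted.gz is just a placeholder for whatever you named the converted ELF weights:

```
{
  "leelaz": {
    "engine-command": "./leelaz --gtp --lagbuffer 0 --weights %network-file",
    "network-file": "elf_v2_converted.gz"
  }
}
```

If I understand the config correctly, Lizzie substitutes %network-file with the network-file entry, so changing that one line should be enough to switch networks.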

It’s possible my setup is messed up, and I’d be very interested if that were the case.

2 Likes

Maybe try using the default v0.16 LZ? I remember having trouble when replacing the LZ exe file.

1 Like

Sorry to be thick. I am already using Lizzie 0.6 with the latest LZ weights from April. I see the ELF link, but I am not sure where to find its LZ conversion.

An interesting test would be to have the latest LZ weights play the ELF weights on Sabaki.
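(If you’d rather script the match than set it up in Sabaki, a minimal GTP driver along these lines should work. It’s only a sketch: the weight filenames and playout count are placeholders, and there’s no error handling.)

```python
import subprocess

def start(cmd):
    # Launch a GTP engine; leelaz sends diagnostics to stderr and replies on stdout.
    return subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

def send(engine, command):
    # A GTP reply starts with "=" (or "?" on error) and ends with a blank line.
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    reply = []
    while True:
        line = engine.stdout.readline()
        if line.strip() == "" and reply:
            return " ".join(reply)
        reply.append(line.strip())

# Placeholder weight files: the converted ELF network vs. the April LZ network.
black = start(["./leelaz", "--gtp", "--noponder", "-p", "1600", "-w", "elf_v2_converted.gz"])
white = start(["./leelaz", "--gtp", "--noponder", "-p", "1600", "-w", "lz_april.gz"])

passes = 0
for move_number in range(500):
    color, mover, other = ("b", black, white) if move_number % 2 == 0 else ("w", white, black)
    vertex = send(mover, "genmove " + color).split()[-1]
    print(color, vertex)
    if vertex.lower() == "resign":
        break
    send(other, "play %s %s" % (color, vertex))  # "play b pass" is legal GTP, so passes relay fine
    passes = passes + 1 if vertex.lower() == "pass" else 0
    if passes == 2:
        print(send(black, "final_score"))
        break

for engine in (black, white):
    send(engine, "quit")
```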

1 Like

See new link.

1 Like

I am getting normal-looking suggestions on a first test.

Update: now I see it too! Although C6 takes over if I wait long enough.

2 Likes

Posted to Reddit for further discussion: https://www.reddit.com/r/baduk/comments/bge15j/new_fuseki_leelazero_v017_with_elf_opengo_v2/

FWIW, after wQ4, the only moves AlphaGo Teach considers for black are C6 (winrate for white 53.6 %) and P17.

“Best” variation after bC6: https://online-go.com/review/388633

Feels highly unusual in its own right. AGT gives it 52.5 % for white.

People seem to be confused (I am too) because you played White Q4, not R4 as in @mark5000’s original example.

I reproduced the 3-4 armpit hit and the micro enclosure with unmodified Lizzie 0.6 and the converted ELF weights. ELF started choosing them within the first few thousand playouts. I’ll check native ELF next.

EDIT: I reproduced them with native ELF too. This is using just 3200 playouts.

So I’m back to square one: these moves are network preferences of the strongest freely available AI at the moment. I realize it’s not an oracle, but it’s also, like, 14 dan, so I’m trying very hard to make sense of its moves.

1 Like

Here’s another position, from 1. r16 d17, where ELF would play 2. c16 even at fairly low playouts. It does it again in the LR corner (in a different order), plays the micro enclosure in the UR corner, kicks the high approach in the LL corner, then goes all AlphaGo with move 25. New meta or neural network gone wild? You decide.

EDIT: after 400k playouts the winrate gradually drops 4% and ELF starts to prefer 2. d3, without the c16/c17 exchange. It’ll still play the exchange later on, however.

EDIT 2: 1 million playouts. c16 became the top choice again.

2 Likes

My bad… Shouldn’t be doing stuff like that at 2 in the morning. blushes

These variations are highly fascinating to this 12k. I keep adding them to the review that I posted on Reddit; I hope you don’t mind. Thank you for your work, mark5000 :slight_smile:

2 Likes

This is truly interesting, but I am not really surprised. You can think of the fuseki preferences of these strong AIs as numerical solutions to the game of Go. At points where the best move is more obvious, the AIs will agree more often; but when the best move is uncertain, as in the opening, where many points have close to the same value, it’s very reasonable that the solutions will show more variance. The fundamental method behind LeelaZero does not produce the solution to Go (as a mathematical problem), only a solution. It’s my personal belief that there are many (perhaps dozens of) different fuseki patterns/theories that are nearly equal, and that the difference between them can probably only be determined through theoretical analysis. Unfortunately, it seems that humans lack the requisite theoretical tools to undertake such a task.

4 Likes

I think you’re on to something there. I started this topic because OpenGo v2 differs so much from predecessor AIs such as AlphaGo, Fine Art, PhoenixGo, LeelaZero, OpenGo v1, Minigo, and Golaxy. I wondered how the “solutions” can all be valid. Take Minigo v17 0961’s output, for example:

This is so much more conventional to my eyes. But OpenGo v2 wouldn’t be caught dead playing 4-4 points at >10k playouts. Two star points, to OpenGo v2, is an 11% mistake. This means one of two things: (1) OpenGo v2 knows something the other AIs don’t, or (2) OpenGo v2 is badly misjudging the above positions. Right now I’m leaning towards door number 2.

1 Like

Could it be that you are challenging the AI with unconventional opening patterns for which it has little knowledge? One of your examples involves a 3-3 point, and the other opens with four 3-4 points.

I just started an ELF-LZ game on Sabaki, and so far it looks much more conventional (well, from an AI perspective).

2 Likes

Yes, that’s what I think is happening. ELF was trained on 800 playouts a move. At 800 playouts, it opens with a 4-4 point and White replies in kind. If given more playouts, it eventually switches White’s reply to a 3-4 point, which it considers better, and will eventually switch Black’s move to a 3-4 point, too. It’s from that starting position that I ran the above tests and noticed weird moves.

My takeaway is that more playouts isn’t necessarily better for ELF v2, and it can be unreliable in positions it wouldn’t ordinarily choose, even if that position is one 3-4 point and nothing else on the board. It’s a little disappointing.

2 Likes

FWIW, the finished game. ELF v2 (black) wins by 1.5 points. An epic game with two large ko fights towards the end (each lasting more than 70 moves…).

I ran another game with ELF as white and 5.5 komi. LZ resigned. I am now very tempted to switch to ELF for game analysis, despite the weird opening analysis shown by @mark5000.

2 Likes

Is there any way to combine them, so you get suggestions from both ELF and LZ at the same time? Something like rating the best moves from both AIs and then choosing the move that both consider good. Surely two overwhelmingly strong AIs are even stronger than either one individually, right?
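For concreteness, here’s roughly the combining step I have in mind, in Python. It assumes each engine’s candidate winrates have already been parsed into a dict (e.g. out of the lz-analyze output that LZ v0.17 exposes); the moves and numbers below are made up for illustration:

```python
def combine_candidates(elf, lz, top_n=5):
    """Pick the move both engines like best.

    elf, lz: dicts mapping GTP vertices (e.g. "Q16") to winrates in [0, 1],
    parsed from each engine's analysis output.
    """
    # Only consider moves that appear in both engines' top-N candidate lists.
    elf_top = set(sorted(elf, key=elf.get, reverse=True)[:top_n])
    lz_top = set(sorted(lz, key=lz.get, reverse=True)[:top_n])
    shared = elf_top & lz_top
    if not shared:
        return None  # total disagreement; fall back to one engine
    # Rank shared moves by their worst-case winrate, so a move must look
    # good to *both* engines to come out on top.
    return max(shared, key=lambda mv: min(elf[mv], lz[mv]))

# Made-up numbers, for illustration only:
elf = {"C16": 0.52, "D3": 0.51, "Q16": 0.50}
lz = {"Q16": 0.53, "D3": 0.52, "C17": 0.49}
print(combine_candidates(elf, lz))  # -> D3
```

Taking the minimum of the two winrates is just one way to encode “both consider it good”; averaging them would be the gentler alternative.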