The future of Human vs Computer Go in a post-AG world

In or around 2015, Google’s AlphaGo set off a kind of evolutionary “Cambrian explosion” in the world of computer Go AI. At this point it is fair to say the genie is out of the bottle and there is no going back. Go has existed in some form or another for roughly 5,000 years, yet it is only in the last year and a half that computers have suddenly reached a level surpassing the top pros. So while AlphaGo was the first, it will certainly not be the last: in a few years, more and more AI programs will proliferate and best the top pros at this ancient game. In the grand scheme of things, a few years compared to a few thousand years is but a blink of an eye. AI has won, forever, and there is no going back.

But that is not to say that future Human vs Computer Go games must be futile or meaningless. We merely need to adapt and evolve the nature of the competition and the structure of the match to make things more interesting, and to put the players on more common ground without resorting to a handicap of extra moves or stones.

One natural way to do this is to compete at energy parity. To its credit, the version of AlphaGo that beat Ke Jie is reported to use only 10% of the power required by the version that played Lee Sedol a year or so earlier. That is a huge step forward, but it still uses far more power than the human brain, or even the entire human body. So as computers get stronger and stronger at Go, one eventual milestone would be the first AI Go program that can beat a top pro while consuming no more energy than the pro himself. This could take the form of scaling the AI down to use fewer and fewer resources, or of artificially capping its thinking time, forcing it to use less total energy, so that by the end of the game the AI has consumed no more power or energy than its human counterpart. That would be an interesting goalpost and a qualitatively different challenge.
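To make the idea concrete, here is a minimal sketch of how an energy-parity time cap could be computed. All the numbers and names are illustrative assumptions, not measured figures: the human brain is often quoted at roughly 20 W, and the machine’s draw here is an arbitrary placeholder.

```python
# Sketch: cap the machine's total thinking time so that, over a whole game,
# it consumes no more energy than its human opponent.
# ASSUMPTIONS (hypothetical): human brain ~20 W; machine draws a fixed,
# measured wattage while "thinking".

def thinking_budget_seconds(game_length_s: float,
                            human_watts: float = 20.0,
                            machine_watts: float = 2000.0) -> float:
    """Total thinking time the machine may use to match the human's energy."""
    human_energy_joules = human_watts * game_length_s
    return human_energy_joules / machine_watts

# A four-hour game at 20 W gives the human 288 kJ; a 2 kW machine
# may then think for only 144 seconds in total.
budget = thinking_budget_seconds(4 * 3600)
```

The referee would simply stop the machine’s clock once the budget is spent, analogous to byo-yomi but denominated in joules rather than seconds.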

Another way to make human vs computer matches more intriguing is to give the human a limited number of redos or undos: to allow the human to go back to a particular move and play out a different branch, to see whether the new path leads to a better end result or maybe even a winning game. While this may seem like “cheating” at first glance, it really is not. Let me explain.

Computers have always been better than humans at brute-force calculation. That has been true since the days of the very first pocket calculator. Your mobile phone can perform arithmetic orders of magnitude faster than all the top mathematicians in the world combined, whether working as a team at mental math or with pencil and paper. But we should not confuse brute-force calculation with “artificial intelligence”, any more than we would call a rudimentary autopilot capable of holding an airplane’s altitude more “intellectual” than the human pilot. Likewise we would not attribute “artificial intelligence” to a fast calculator.

Everything from brains to computers is really a higher-level emulator, a second-order (or higher) computer. It is the laws of physics and the individual quantum effects, the atoms and molecules and forces of the universe, that combine in such-and-such a way to actually do the “computing”. Everything else is a virtualization and symbolic-computing layer on top of that. Be it an abacus, a pocket calculator, a slide rule, a human brain or a silicon processor, these are all second- or higher-order computers within a computer: virtual emulators of processing, if you will. The reason an Intel processor is faster than an abacus is that the integrated circuit is far smaller, and thus more efficient, than the macro-sized beads; but make no mistake, both are symbolic representations of calculations, and the smaller we can make those representations, the more powerful a processor we can have. At the end of the day, the universe is the only true computer, and everything else is a higher-level emulator; the only question is how efficient the symbolic emulation is.

Humans are better than computers at some things, while computers are much better than humans at many others. The fundamental question is: why?

It is akin to building a virtual CPU inside of Minecraft. It would be far more efficient to dispense with these higher-order virtualizations, which add symbolic overhead and artificially increase the computational cost by orders of magnitude, for no good reason. This is why the human brain cannot compute arithmetic faster than a crude calculator. The brain is not hardwired to manipulate numbers at the level of efficiency with which a silicon processor can symbolically reduce them to atomic bits of binary zeros and ones, and thus calculate much faster, and without error. It is not that the calculator is more advanced than the human brain, but that the brain has to visualize and add symbolic layers to concepts such as numbers (it cannot manipulate them directly at the molecular level). Just as in the earlier virtual-CPU-inside-Minecraft example, this is why humans cannot outdo computers at brute-force arithmetic calculations.

Coming back to Go: MCTS is akin to brute-force calculation, not the sort of “intuitive artificial intelligence” people imagine when they think of AI outsmarting human intuition. Traditionally, humans had the advantage over computer Go programs because the game’s enormous search tree made an exhaustive brute-force search impossible. To a large degree, this thing we call “human intuition” came into play, and as long as computers could not emulate that intuition, no amount of scaling up brute-force search would let them catch up. Modern AI Go programs have changed all of that, because for the very first time in human history, computer programs are starting to catch up with human intuition at the game of Go. And with the advantage of brute-force calculation that humans are not afforded, MCTS and other “brute force” algorithmic methods carry AlphaGo and DeepZen and the others the “last mile” to the “finish line”, finally boosting them to superhuman levels of overall effective play.
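To make the “brute force” half of the hybrid concrete, here is a toy sketch of pure Monte Carlo evaluation, the ancestor of MCTS, applied to one-pile Nim (take 1–3 stones; whoever takes the last stone wins) rather than Go, since Go itself needs far more machinery. This is not AlphaGo’s actual algorithm: real MCTS adds a search tree, and AlphaGo adds neural networks to guide it. All names and the rollout count are illustrative.

```python
import random

def random_playout(stones, my_turn):
    """Play random moves to the end; True if the 'my_turn' side takes the last stone."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn
        my_turn = not my_turn

def monte_carlo_move(stones, rollouts=2000):
    """Pick the take (1-3) whose random playouts win most often for us."""
    best_take, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        if take == stones:               # taking everything wins outright
            return take
        wins = sum(random_playout(stones - take, my_turn=False)
                   for _ in range(rollouts))
        if wins / rollouts > best_rate:
            best_take, best_rate = take, wins / rollouts
    return best_take
```

There is no “intuition” anywhere in this code: it wins purely by simulating thousands of random games, which is exactly the kind of raw computation humans were never going to match.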

So the question is: how do we highlight the true “intelligence” aspect of artificial intelligence, bring that real AI to the forefront, and diminish the brute-force aspect of modern “Go AI”, which in truth is a hybrid of AI and good old-fashioned brute-force calculation, at which humans were never good and never had a chance or a hope against computers anyway?

One way to do that would be to give human players “redo” attempts. Since by definition Go is not a brute-force-able game anyway (ironically, otherwise computers would have beaten humans at Go back in the 1960s, no DCNN needed), a 25-kyu player with unlimited redos could spend the rest of his life trying and never win a single game against something like AlphaGo 2017. But on the other hand, it is fair to say that someone like Ke Jie or Lee Sedol, given sufficient redos, might find a particular branch in which he wins out against AlphaGo and the other top AI Go programs of the future.

Since humans are by definition fallible, while machines by definition do not get tired or sleepy and do not slip up or make “miscalculations”, it would actually only be “fair” for a human vs computer competition to allow the human a set number of attempts per game.

Or, if that is not acceptable, a more natural way to level the playing field would be for AI programs to rely ONLY on the neural-network component for play, without MCTS or anything remotely like it. Another possibility would be to let the human use a real board for analysis: to put down stones and explore the different variations in a manner that does not leave him at the mercy of his own fallible memory and visualization, which can never compare to the brute-force “speed of light” search a computer performs instantly.
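A “network-only” player of this kind is easy to sketch. The code below is purely illustrative: `policy_net` is a hypothetical stand-in for a trained policy network mapping a position to move probabilities, and the player simply takes the single most probable move, with no search at all.

```python
def network_only_move(board, policy_net):
    """Play the policy network's top choice directly, with no tree search."""
    probs = policy_net(board)          # hypothetical: returns {move: probability}
    return max(probs, key=probs.get)

# Toy stand-in for a trained policy network, for illustration only.
def toy_policy(board):
    return {"D4": 0.5, "Q16": 0.3, "C3": 0.2}

move = network_only_move(None, toy_policy)   # picks "D4", the top-probability move
```

Such a player embodies only the “intuition” the network has learned; sampling from `probs` instead of taking the argmax would be the other common design choice, trading a little strength for variety.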

Finally, once Go AI programs have cast aside MCTS and its functional equivalents, the brute-force components that humans could never compete against and that were never really “artificial intelligence” anyway, and rely solely and exclusively on the neural networks instead, the next step would be to reduce the dataset the AI has access to, in a way that is quantitatively fair to its human counterpart: train an AI from scratch and let it play only a very limited number of games, on par with what a human would be exposed to in a lifetime.

What is intuition? It is the ability to extrapolate and learn from a subset of data. AlphaGo has played hundreds of millions of games within the span of a year or two. Is that true intuition, when a top pro could only ever hope to play a tiny fraction of a fraction of that in his entire lifetime?

So the ultimate goal in AI would be to reach energy parity with the human brain itself, to use only the neural network and no other brute-force searching, and to expose it to a limited dataset of games, comparable to what a human would be exposed to.

In human vs computer games, the human should be given a generous number of redos; the human should be given much more time than the computer; the computer should be capped at a few seconds of thinking per move and not allowed to think on the human’s time; no komi should be given to the computer; the human should be allowed to pick his color; and finally the human should be given the ability to consult with other humans during the game, as well as to use a real Go board during the live match to physically play out and experiment with variations.


I especially like the idea of limiting the computer’s dataset. For 40 years I have contended that computers cheat at chess, because they consult an internal library, while humans are prohibited from consulting a book. Our library just happens to be mostly external—so what! Similarly, the computer in a sense gets to consult an internal board, so why should humans be barred from consulting a board? And limiting computers to the neural network also sounds logically justified. The other ideas, however, seem artificial to me.

Alternatively, we could just let the human throw one bucket of water on the computer in IRL games!


Great idea! Energy as the grand equalizer.

While we’re equalizing human and computer, I propose that humans be given the same number of bathroom breaks as computers (call it “bathroom break parity”). Only then will we achieve true equality.


Very provocative thoughts - thank you for your intelligent analysis and intuition!

I agree all of these are interesting approaches, except for:

Those changes would just be cheating, it seems to me.

Also, on the topic of allowing redos, I think either both human and computer should be allowed redos, or neither. Because of course one thing that you get before a redo is insight into the creative ideas your opponent is working with. If a computer can figure out how brilliant a human move is, and ask for a redo, good for it!


This all seems like overthinking to me: trying to come up with some “fair” rationale for limiting the AI so that people can win. Why do that? Are we trying to establish some sense of human superiority, like “Oh, well, that dumb AI uses so much power”?

The only meaningful goal would seem to be to have an AI that is challenging but not overwhelming to beat. This type of thing is already done all the time to create AIs that SDKs and DDKs can play against. There are simple techniques already. Why should it be any different for Pros?


Some of the limitations aim at equalizing unfair advantages that the computer has. There was an old Twilight Zone episode about robot boxers. Humans had been banned from fighting them because of the dangers. Now those robot boxers had an obvious unfair advantage. Heck, even weight differences are recognized as unsporting. Similarly, computers have unfair advantages, as The Beginner cogently outlined. In addition, computers are immune from distractions such as noise, headaches, temperature, indigestion, etc. It’s funny how humans would not mind losing a footrace to a horse, or a strength contest with a machine (poor ol’ John Henry and the jackhammer notwithstanding), but make the contest about the intellect and suddenly all perspective is lost.


My thoughts exactly. As a go player, I am looking forward to the divine move, not the result of a match. From the smiles on the faces of the pros these past few days, I am guessing they didn’t care much about the result either.


What a fantastic observation! Nailed it!

I’m taking the ideas listed by The Beginner as ways to clarify some game-playing distinctions between humans and computers, and as additional challenges for AI. We want low-power AI, and demonstrating that it works for go would be good. Similarly, we want AI to be able to learn more quickly from a smaller number of examples, to be able to play with less memory, and to figure out when a “redo” would make sense, having seen and understood a clever response by a human, etc.

A related question is the overall effect of super-human go AIs on the human game. We saw in chess that it meant pro chess players had much less of an economic niche as teachers, but it inspired more amateurs to start playing. There continue to be problems with players in chess tournaments finding stealthy ways to get advice from computers, which is really a shame. Some people really are looking for a result, rather than beautiful moves.


This discussion is really interesting, but from my point of view as a DDK go lover, a beginner, and an amateur chess player, I would appreciate a different evolution in matches between all players, including AI. In 1997, when Kasparov lost, I was sad. But the worst part for me was that, since chess computers use brute calculation force, there was no chance for the top players to win in the future, and so no hope for amateurs either. In go, on the other hand, even supposing a brute-force calculation were possible, meaning all the combinations were accessible to the computer, a certain number of handicap stones would let the game keep its mystery. It would remain a challenge to whittle that handicap down by finding one of the perfect moves in a specific position, and I think it would improve human games. On the other hand, as I am a beginner, maybe handicap games are less interesting for strong players.

Just ban all bots from Go websites. Take back control of the game. Why should freaking computers be allowed to ruin a game which has given interest to so many people for so many years? Human v Human games only. Nothing else.


That will always be a minority opinion, as the bots are very popular opponents on OGS. Just check the running games page from time to time - you will find that the top games almost always involve either Leela or DarkGo.

That being said, AIs are not “in control” of the game. I personally find bots rather boring to play against, and so I don’t. Problem solved.


I think the most exciting prospect is not to hobble the AI for humans to have a fighting chance, but to push AI even further so that it can explain its reasoning, tailor a curriculum of exercises and practice games for individual players, and provide accessible and efficient teaching for anyone who wants it.


You’ve laid out some alternative ways of making it easier for humans to beat computers, but I’d like to hear your thoughts on why you think that handicap is an inferior way of tackling the issue.

Since I originally posted, I’ve had a change in perspective. The reality is that humans cannot compete with AI. But what is important is that we enjoy something for its own intrinsic “fun” sake, not because we are “best” at it or better at it than someone or something else. I believe that in the age of AI besting humans at everything, the question will no longer be “what do you do best” or “how good are you”, but “what are your true passions, hobbies and interests”, measured by intrinsic excitement and curiosity, regardless of skill level or mastery.


I agree that we are experiencing an AI summer right now (with all the accompanying effects).

Still, I cannot agree with most of the other things you said, especially not with this one.

What makes you think this is true? What do you even mean by that? Even if there is a satisfying answer here, why does the program take the credit, not the programmer(s)? Who is really the intelligent agent here? Or to put it differently: why should I treat a program as something that acts/plays games?

I could make a case for a strong AI, that it should be treated as something that acts. But not for something like alphago. And if this is true, then the alphago matches were really between Hassabis and Lee. (I write “Hassabis”, but there are of course many people at Deepmind.) They just played with an asymmetric rule set, since everyone assumed that Lee using a computer was against the rules, while it was acceptable for Hassabis. There is nothing astonishing about asymmetric games being unfair. If they returned to symmetric rules and Lee were a competent programmer/engineer, he might very well beat Hassabis.


I took his post and linked essay to mean that he believes AI will eventually become the most powerful learning tool in humanity’s possession. While human/machine competition will eventually become pointless (if it hasn’t already), future “intelligent” machines could usher in a new age of enlightenment and self-actualization for humans to grow and develop their own minds. A change from his original sentiment that AI must be dumbed down for meaningful interplay with a human over the board.

Even though I may see a backhoe at work and remark at the awesome power of the machine itself, that doesn’t diminish the accomplishment of engineers and builders who brought it into existence.

I don’t recall anyone claiming the AI developed itself. Though it was developed to be “self-learning” in an effort to transcend even our own capabilities at programming algorithms for strong play.

Quite true. No sane person insists that their arm must compete with a backhoe for digging holes. But the point of the Lee match was never to be symmetrical. The point was to showcase the power of Deep Mind’s computer program and confirm the strides they had made toward developing “true” AI by imitating an organic neural network.

I don’t think anyone needs to get upset that AlphaGo is admired as an entity of sorts. It has performed tasks that are as-yet unachievable for humans. It made deductions and inferences independently of the programmers who designed its code. That is a huge deal.


Very interesting perspective. Thank you. 🙂


Interesting discussion.

Have you considered the fact that the people who built Alphago don’t know how it does what it does?

That’s right. They built a learning machine, but they don’t know the actual techniques it learned.

There are structures in the Alphago neural net analogous to the structures in our brain, and in each case we don’t know how they do their job. The Alphago programmers did not create those structures, in the same way our parents did not create them in our brain.

If you think that Hassabis “programmed AlphaGo to beat Lee” you are probably looking at it in the wrong way. Or at least a very contorted way. It would be somewhat similar to saying “Hassabis vs Lee’s parents”.

I think the general population’s understanding of how AI works now has not caught up with how it actually works, especially with the role of the programmer in it.



The only sentence that I was, and will be, discussing is the one I cited. I took it as his point of departure, marking the cause of a problem, if you will. I am not even asserting its converse; I am just not sure whether I can make sense of it.
It seems to me, that the trouble is with the tacit assumption that current AIs are agents. And if you say

I want to point out that you make the same tacit assumption. Unless of course you meant only that a certain playing strength is unachievable for humans without the computer, in the same sense that flying to the moon is unachievable for humans without a space rocket. The use of present tense suggests otherwise, though. This assumption might amount to nothing, but I am not convinced of that, as soon as anyone starts drawing conclusions from it, or, as in this case, is then able to see AIs as competitors of some sort.

In what sense do we not know what alphago does? We understand the learning algorithm well enough, and this is the only “technique” the program “uses”. That you cannot predict what the minimised cost function is going to look like is something entirely different. You also cannot predict your calculator’s calculation without doing the calculation. In principle, you could go ahead and track how the cost function is minimised, that is, minimise it by hand; it is only tedious. This alone is significantly different from a system (like the brain, maybe) which we call an agent.
By the way, not understanding the inner workings of a system is hardly evidence for its agenthood or intelligence.

I do think that Hassabis programmed alphago to beat Lee. It is just another way of saying that alphago’s cost function was minimised, instead of, say, maximised (programmed to lose). Your analogy is nice, but it fails, since Lee’s parents in no way agreed to a match or game of some sort. Which, by the way, only Hassabis did, and not alphago…

Edit: This is also an answer to GreenAsJades comment, but I don’t know how to mark it as such.