Just thought there may be some interest in seeing what AlphaGo is keeping busy with since retiring from the game.
I feel like the wording is sometimes the newspaper equivalent of tabloids.
Today, DeepMind announced that it has seemingly solved one of biology's outstanding problems:
Oh so protein folding is completely solved…
The limitations of the system aren't yet clear
But the system clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.
Oh so it's the best we have, but if it's solved the problem it must always get it right, right?
For about two-thirds of the proteins it predicted a structure for, it was within the experimental error that you'd get if you tried to replicate the structural studies in a lab. Overall, on an evaluation of accuracy that ranges from zero to 100, it averaged a score of 92, again the sort of range that you'd see if you tried to obtain the structure twice under two different conditions.
I don't know much about the problem and it sounds very interesting to me. I'd love to have the time to read more about it, although I haven't studied biology before.
So we go from the problem is solved at the start of the article, to the "computational challenge" is solved.
By any reasonable standard, the computational challenge of figuring out a protein's structure has been solved.
Unfortunately, there are a lot of unreasonable proteins out there. Some immediately get stuck into the membrane; others quickly pick up chemical modifications. Still others require extensive interactions with specialized enzymes that burn energy in order to force other proteins to refold. In all likelihood, AlphaFold will not be able to handle all of these edge cases, and without an academic paper describing the system, it will take a little while, and some real-world use, to figure out its limitations. That's not to take away from an incredible achievement, just to warn against unreasonable expectations.
So it really isn't solved, but it is huge progress, especially in reducing the computational effort needed to get good results.
I don't want to be overly critical (even though I sound like I am being so); it's great to hear updates about these things, and that huge progress is being made. I just wish they'd be more precise when they word things. Why start an article saying the problem may be solved when at the end you concede that it hasn't been?
It happens a lot with physics too: there are a lot of science-y, tabloid-y websites publishing articles saying problems X and Y that have plagued physicists for years are solved, when it turns out it's just another idea or theory that isn't much better and isn't widely accepted.
E.g.
I'll be honest, I thought this was an origami program and I am mildly disappointed.
I read the news here:
I think that article presented the matter quite well.
I have similar issues, although the Guardian doesn't go back on initially saying the problem is solved.
I don't really like the phrasing that the problem "stumped researchers for half a century". It does make it sound like the problem was something they didn't understand, as opposed to something that was just computationally intractable.
I think the equivalent article about AlphaGo would read something like "DeepMind solves the game of Go, with AlphaGo understanding complex moves that stumped professional players for centuries".
I think we'd say that AlphaGo hasn't solved Go in any sense, but it has made significant contributions to our understanding of it.
I wish I weren't coming across as so negative; it's great that there are articles explaining these things, and that there is great progress in science to be explained.
We just need them to host AlphaGo (original/Master/Zero) or MuZero bots on a server now.
I know they're busier with more important things, but one can dream.
There is no proof that KataGo is worse.
And there is no direct proof that KataGo or other bots are better. Keeping things that way is probably a significant motivating factor for Google to not provide public access to their bots.
The AlphaGo project is also as much about PR and marketing as scientific advancement. For the purpose of publishing papers, it has been sufficient for them to internally self-evaluate their work against earlier versions. While providing more access to (or even fully releasing) their trained models would greatly enhance reproducibility, enable further scientific progress by others, and be of intense interest to the Go community, it also comes with the downside (to Google) of potentially undermining the perceived accomplishments of the project if public evaluation finds weakness and/or unfavorable comparisons to other AI projects.
AlphaGo has been retired from public competition, and I think it is unlikely that it or its successors will come out of retirement for more public games, since that would only carry a PR risk to Google in potentially blemishing its already stunning record by possibly adding an embarrassing loss.
E.g. if AlphaGo loses a mirror go game to a kyu player
Didn't a professional try to play mirror Go against the Master version, during the sixty-game Tygem / Fox series between the Lee Sedol and Ke Jie matches?
Yes, but unfortunately the pro had black (mirror go is believed to work better with white): AlphaGo vs. The World: Game 51, AlphaGo Master (W) vs. Chou Chun-hsun 9p (B) - YouTube
If black mirrors until the end, white wins by komi. If white mirrors black, it's still black who has to break the symmetry to win, so white has the advantage in mirror go either way.
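The komi arithmetic behind that claim can be sketched in a few lines. This is my own back-of-the-envelope simplification, not from the thread: it assumes area scoring, the modern 7.5 komi, black opening on tengen and then mirroring, and the mirroring running to the end with no captures breaking the symmetry.

```python
# Rough score for a fully mirrored 19x19 game under area scoring.
# Assumptions (mine): black plays tengen then mirrors white's moves,
# no captures ever break the symmetry, komi is 7.5.
BOARD_POINTS = 19 * 19          # 361, an odd number of points
KOMI = 7.5

black_area = (BOARD_POINTS - 1) // 2 + 1   # 181: half the board plus tengen
white_area = (BOARD_POINTS - 1) // 2       # 180: the other half

white_score = white_area + KOMI            # 187.5
margin = white_score - black_area          # 6.5
print(margin)
```

Even with black's one-point tengen advantage, white still comes out ahead by 6.5 points, which is the sense in which a perfect mirror loses to komi for black.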