Just thought there may be some interest in seeing what AlphaGo is keeping busy with since retiring from the game.
I feel like the wording sometimes reads like the newspaper equivalent of tabloids.
Today, DeepMind announced that it has seemingly solved one of biology’s outstanding problems:
Oh so protein folding is completely solved…
The limitations of the system aren’t yet clear
But the system clearly performs better than anything that’s come before it, after having more than doubled the performance of the best system in just four years. Even if it’s not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.
Oh, so it’s the best we have, but if it’s solved the problem it must always get it right, right?
For about two-thirds of the proteins it predicted a structure for, it was within the experimental error that you’d get if you tried to replicate the structural studies in a lab. Overall, on an evaluation of accuracy that ranges from zero to 100, it averaged a score of 92—again, the sort of range that you’d see if you tried to obtain the structure twice under two different conditions.
I don’t know much about the problem, but it sounds very interesting. I’d love to have the time to read more about it, although I haven’t studied biology before.
So we go from the problem is solved at the start of the article, to the ‘computational challenge’ is solved.
By any reasonable standard, the computational challenge of figuring out a protein’s structure has been solved.
Unfortunately, there are a lot of unreasonable proteins out there. Some immediately get stuck into the membrane; others quickly pick up chemical modifications. Still others require extensive interactions with specialized enzymes that burn energy in order to force other proteins to refold. In all likelihood, AlphaFold will not be able to handle all of these edge cases, and without an academic paper describing the system, the system will take a little while—and some real-world use—to figure out its limitations. That’s not to take away from an incredible achievement, just to warn against unreasonable expectations.
So it really isn’t solved, but it is huge progress, especially in reducing the computational effort needed to get good results.
I don’t want to be overly critical (even though I sound like I am being so), and it’s great to hear updates about these things and that huge progress is being made. I just wish they’d be more precise when they word things. Why start an article saying the problem may be solved when at the end you concede that it hasn’t been?
It happens a lot with physics too: there are a lot of science-y, tabloid-y websites running articles saying that problems X and Y, which have plagued physicists for years, are solved, when it turns out it’s just another idea or theory that isn’t much better and isn’t widely accepted.
I’ll be honest, I thought this was an origami program and I am mildly disappointed.
I read the news here:
I think that article presented the matter quite well.
I have similar issues, although the Guardian doesn’t go back on initially saying the problem is solved.
I don’t really like the phrasing that the problem ‘stumped researchers for half a century’. It makes it sound like the problem was something they didn’t understand, as opposed to something that was just computationally intractable.
I think the equivalent article about AlphaGo would read something like “Deepmind solves the game of Go, with AlphaGo understanding complex moves that stumped professional players for centuries”.
I think we’d say that AlphaGo hasn’t solved Go in any sense, but it has made significant contributions to our understanding of it.
I wish I weren’t coming across as so negative; it’s great that there are articles explaining these things, and that there is great progress in science to be explained.
We just need them to host AlphaGo (original/Master/Zero) or MuZero bots on a server now.
I know they’re busy with more important things, but one can dream.
There is no proof that KataGo is worse.
And there is no direct proof that KataGo or other bots are better. Keeping things that way is probably a significant motivating factor for Google to not provide public access to their bots.
The AlphaGo project is also as much about PR and marketing as scientific advancement. For the purpose of publishing papers, it has been sufficient for them to internally self-evaluate their work against earlier versions. While providing more access to (or even fully releasing) their trained models would greatly enhance reproducibility, enable further scientific progress by others, and be of intense interest to the Go community, it also comes with the downside (to Google) of potentially undermining the perceived accomplishments of the project if public evaluation finds weakness and/or unfavorable comparisons to other AI projects.
AlphaGo has been retired from public competition, and I think it is unlikely that it or its successors will come out of retirement for more public games, since that would only carry a PR risk to Google in potentially blemishing its already stunning record by possibly adding an embarrassing loss.
E.g., if AlphaGo were to lose a mirror Go game to a kyu player.
Didn’t a professional try to play mirror Go against the Master version, during the sixty-game Tygem / Fox series between the Lee Sedol and Ke Jie matches?
Yes, but unfortunately the pro had black (mirror go is believed to work better with white): AlphaGo vs. The World: Game 51, AlphaGo Master (W) vs. Chou Chun-hsun 9p (B) - YouTube
If black mirrors until the end, white wins by komi. If white mirrors black, it’s still black who has to break the symmetry to win, so white has the advantage in mirror go either way.
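The komi argument above can be sketched as a toy calculation. This is a simplification (it ignores that the mirroring side must eventually deal with the center point and symmetry-breaking tactics), and the function name and the numbers are made up for illustration; it just shows why perfect mirroring to the end loses to komi under area scoring:

```python
# Toy sketch (hypothetical helper, not from any Go engine): under area
# scoring, if one side mirrors perfectly to the end of the game, both
# sides control an identical amount of the board, so komi decides it.
KOMI = 7.5  # a common area-scoring komi; any positive komi gives the same conclusion


def result_if_mirrored_to_end(black_area: float) -> str:
    """Score a hypothetical game in which every move was mirrored."""
    white_area = black_area            # perfect symmetry: equal stones + territory
    white_score = white_area + KOMI    # komi is added to White's total
    margin = white_score - black_area  # White always wins by exactly the komi
    return f"W+{margin}"


# A 19x19 board has 361 points; a perfectly symmetric split is 180.5 each.
print(result_if_mirrored_to_end(180.5))  # -> W+7.5
```

In practice, of course, this is exactly why the mirroring side has to break symmetry at some point, and why mirror Go is considered a better try for White: if Black mirrors, Black still loses by komi, whereas if White mirrors, it is Black who must find a way to break the symmetry profitably.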