How much stronger can AI be?

In terms of more direct existential threats from AI, I still think that things like a violent uprising, turning everything into paper clips, or grey goo are less immediate concerns than the likelihood that increased automation simply drives more consumption and demand for energy, while we still have not solved the climate crisis caused by how we generate that energy.

1 Like

I found a PDF of this online here (I’m guessing that it must be in the public domain by now):

I’ll read it later

2 Likes

While you’re absolutely right that a capitalist dystopia is a more immediate concern than AI takeover, I think the paperclip scenario is a valid concern as well. If we manage to create a General Intelligence1, but haven’t solved the "values problem"2 perfectly, then I don’t think we (humans) have a high chance of survival.

1An AI that can perform any human task or thought (but better).
2Solving the values problem means that we have successfully aligned the AI agent’s goals with the values of humanity. This can go wrong in so many ways.

2 Likes

A bot can beat a very experienced player with a 9-stone handicap. We may think that everything is under control when in fact the AI is 10 stones stronger, and by then it would be too late.

I’m certainly concerned about the consequences of AGI. I understand these things and they are surely terrifying, but I think that too much focus is put on such fears, to the point of ignoring other, more immediate problems caused by much simpler systems that already exist or will soon be deployed.

I’m a bit skeptical of how close we really are to AGI. A lot of people working on and applying AI are not even addressing or advancing the development of general intelligence. On the other hand, the wide-scale deployment of much simpler forms of automation and narrow AI is already in the immediate pipeline, and there will certainly be immediate economic and societal consequences that must be addressed.

5 Likes

The economic and societal consequences of narrow AI have nothing to do with the dangers of general AI.

2 Likes

This one was quite interesting and fun to read :slight_smile:

Seems like a scanned copy (?) was uploaded here: http://nob.cs.ucdavis.edu/classes/ecs153-2019-04/readings/computers.pdf

Or here: Computers Don’t Argue

3 Likes

Yes, exactly!

Hence, it’s erroneous when people apply the fears of general AI (a hypothetical that does not yet exist) to narrow AI systems (which do exist and cause very different problems).

2 Likes

Lmao bots are the worst sandbaggers

I’ve always been very fond of “Computers Don’t Argue,” and my father was a great fan of that story in particular. I found it surprising that it didn’t get nominated for the Hugo, but that may be because Dickson’s “Soldier, Ask Not” (later expanded into a novel) won in the short-fiction category in 1965. Like the Oscars, the Hugos tended to spread the appreciation around, until Harlan Ellison started winning year after year.

3 Likes

I can imagine a general AI being a lot better at extracting our values from our writings than we would be ourselves…

Apart from that, our values are shifting all over the place through time as well.

3 Likes

Being able to understand our values and actually adopting them are quite different things!

4 Likes

From: Saturday Morning Breakfast Cereal - AI

4 Likes

Just wait until we get “strategic” AI. AI that will analyze a player’s game history and develop a custom board strategy that works against an opponent’s preferred playing style.
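Purely as an illustration of what that kind of opponent modelling might start from (a toy sketch with made-up names and data, not any real engine’s API), one could mine an opponent’s game history for their favourite opening and prepare against it:

```python
# Toy sketch of "strategic" opponent modelling -- all names and data
# here are hypothetical, just to make the idea concrete.
from collections import Counter

def favourite_opening(game_records):
    """game_records: list of games, each a list of (player, move) pairs.
    Returns the opponent's most frequent first move.
    Assumes at least one non-empty game in the history."""
    first_moves = Counter(moves[0][1] for moves in game_records if moves)
    return first_moves.most_common(1)[0][0]

history = [
    [('B', 'Q16'), ('W', 'D4')],
    [('B', 'Q16'), ('W', 'D16')],
    [('B', 'C4'), ('W', 'Q16')],
]
print(favourite_opening(history))  # 'Q16': prepare lines against it
```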

2 Likes

UPDATE: by Zermelo’s theorem the text below is not true, but it shows why it’s not easy.


What if perfect play doesn’t exist and perfect komi makes no sense? :thinking:

Crazy thought experiment:

(For simplicity) imagine that seki is not possible (or that all neutral points count as white), and that for some reason black has no big statistical advantage in human games of that alternative world.
So every point will be black or white in the end, with no neutral points. And there is no komi.
(Chinese rules with some unknown modification.)
There are 361 points on the board:
black needs an area of 181 to win,
or white needs an area of 181 to win.

Many years ago, they made a magic computer that could conquer an area of 100 for black against any possible moves of the opponent. Every white move was simulated by magic power, so there is a proof that black can conquer an area of 100 no matter what.
There are also many different ways to conquer an area of 100.

And now they have made an artificial black player that can always conquer an area of 180.
And they have made an artificial white player that can always conquer an area of 180.
They try to conquer an area of 181 only if they have enough magic power to prove that they can do it from the current position no matter what.
When they play each other, sometimes black wins and sometimes white wins. A strategy for conquering an area of 180 is not a perfect strategy for conquering an area of 181 later. So in some branches of conquering an area of 180, black has the advantage (for conquering 181 later), and in some branches white has that advantage.

Question:
Why must a player that can always conquer an area of 181 necessarily exist?

Does it have something to do with the fact that this is a perfect-information game?
(So that if there were some hidden moves in the beginning, then an area-181 player would be impossible, while if there are no hidden moves, then an area-181 player always exists. Why? How does that work?)

A maximum territory that it is always possible to conquer surely exists. But why can’t it be less than 181 for both white and black?
In a strategy for conquering an area of 200, there are always some branches that lead to failure. Why can’t the same be true for an area of 181?

If what I described were true, then the concept of perfect komi would be flawed:
when one of the players (black or white) reaches a territory of 180, the game would have to be concluded as a “draw”, because sometimes there is nothing that player can do to eventually reach an area of 181. Maybe there are more winning black branches than winning white branches, but a single komi would not fix that, because after some openings black would need it instead of white.

So 2+2 is harder than it looks at first glance.


(Nothing in the above example contradicts standard game theory. I don’t mean that “perfect play” doesn’t exist; the point above is that maybe we usually attach the name “perfect play” to the wrong thing in Go, and we should call a different thing “perfect play”.)

2 Likes

In short, yes. This is known as Zermelo’s theorem. I could explain why it’s true, since it’s not a very hard proof, but I’m on my phone while travelling, so perhaps later.

1 Like
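
In the meantime, the shape of the argument can be seen on a toy example. Here is a minimal sketch in Python (using a made-up pile-taking game rather than Go; everything in it is illustrative): in a finite perfect-information game with no draws, backward induction labels every position a win for exactly one player.

```python
# Backward-induction sketch of Zermelo's theorem on a toy game
# (hypothetical example, NOT Go): players alternately take 1 or 2
# stones from a pile; whoever takes the last stone wins. The game is
# finite, perfect-information, and drawless, so every position gets
# labelled a win for exactly one player.
from functools import lru_cache

@lru_cache(maxsize=None)
def winner(pile, to_move):
    """Return 'A' or 'B': who wins from this position with perfect play."""
    other = 'B' if to_move == 'A' else 'A'
    if pile == 0:
        return other  # the previous player took the last stone and won
    # The player to move wins if SOME move leads to a position they win...
    for take in (1, 2):
        if take <= pile and winner(pile - take, other) == to_move:
            return to_move
    # ...otherwise EVERY move hands the win to the opponent.
    return other

# No pile size is ever "undecided": one side always has a winning strategy.
for n in range(1, 10):
    print(n, winner(n, 'A'))
```

The same induction runs over Go’s enormous but finite game tree (once a superko rule excludes infinite games), which is why an area-181 player must exist for one of the two colors; if draws are possible, the conclusion weakens to “one player can force at least a draw”.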

So I guess one needs to use something like Chinese rules and/or rules with a superko rule for Zermelo’s theorem to apply, since otherwise one can have a draw/no result from things like triple ko under Japanese rules, or just an infinite non-ending game, which is pretty much a draw.

I can see that Wikipedia says it “allows” for infinite games, but it’s hard to imagine one player “winning” (in go in particular) if a game is infinitely long.

Maybe even Chinese superko rules are a bit messy in practice, so one probably has to think in terms of a theoretical superko rule under which a board position is never allowed to repeat, e.g. positional superko.

I guess since the number of board configurations is large but finite, the game can’t be infinitely long; at most it could hit every configuration once, although that would be a weird game.
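
To make the finiteness argument concrete, here is a toy sketch (illustrative only, not a real rules engine): positional superko can be modelled as a set of every whole-board position seen so far, rejecting any move that recreates one. Since each of the 361 points is empty, black, or white, there are at most 3^361 positions, so the set, and hence the game, is finite.

```python
# Sketch of a positional-superko check (illustrative only, not a full
# Go rules implementation). Under positional superko no whole-board
# position may repeat, so game length is bounded by the number of
# possible positions.
def make_superko_checker():
    seen = set()

    def position_allowed(board):
        """board: a hashable snapshot, e.g. a tuple of 361 values
        from {'.', 'B', 'W'}. False if this position occurred before."""
        if board in seen:
            return False
        seen.add(board)
        return True

    return position_allowed

allowed = make_superko_checker()
empty = tuple('.' * 361)
print(allowed(empty))   # True: first occurrence
print(allowed(empty))   # False: repetition is forbidden

# Crude upper bound on the number of positions (each point is
# empty, black, or white), and hence on game length:
print(3 ** 361)
```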

Imagine a theoretical go game where every single board position somehow arose and at the end the only move available to both players was a pass. The end result of such a game might be very hard to score - it’s not even clear if any modern scoring rules would actually be able to score the game.

Such a game might not exist, of course, and it is clearly nothing like perfect play, since it would likely involve players filling in their own eye space to clear the board.

2 Likes

With Tromp-Taylor rules you should be able to score any position.
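
Roughly: a stone scores for its color, and an empty region scores for a color only if every stone it reaches is that color; regions reaching both colors score for neither. A minimal sketch of that rule (my own simplified illustration, assuming the board is given as strings over '.', 'B', 'W'):

```python
# Minimal sketch of Tromp-Taylor area scoring: an empty region
# belongs to a color iff every stone it touches is that color.

def tromp_taylor_score(board):
    """board: list of equal-length strings over {'.', 'B', 'W'}.
    Returns (black_points, white_points)."""
    rows, cols = len(board), len(board[0])
    score = {'B': 0, 'W': 0}
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if board[r][c] in score:
                score[board[r][c]] += 1
            elif (r, c) not in seen:
                # Flood-fill the empty region, noting bordering colors.
                region, borders, stack = [], set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            if board[ny][nx] == '.':
                                if (ny, nx) not in seen:
                                    seen.add((ny, nx))
                                    stack.append((ny, nx))
                            else:
                                borders.add(board[ny][nx])
                if len(borders) == 1:
                    score[borders.pop()] += len(region)
                # Regions reaching both colors (or none) score for neither.
    return score['B'], score['W']

print(tromp_taylor_score(["BB.",
                          "BWW",
                          ".W."]))  # -> (3, 4)
```

That is why even a pathological “every position somehow arose” game still gets a definite score under these rules, even if the number it produces looks strange.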

2 Likes

Oh right, yes, the idea of an empty region reaching stones of only one color.

Yes that seems like it would assign a score to more or less anything.