Playing around with the GPT chat bot

Definitely!
KataGo doesn’t know what Go is, just as a screwdriver doesn’t know what a screw is.

The opposite: its goal is to maximise winrate through exploring many variations.
It all is very mechanical, no intuition or knowledge involved.

It can’t, in the same way a coffee machine can’t: it has a program and it executes its program.
KataGo’s program is to list a huge number of variations, sort them by winrate and pick the best. It can’t really do anything but that, so yes, it can’t make any decisions but those the devs wrote into it.
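A minimal sketch of that “program”, assuming we already have a winrate estimate for each candidate move (the moves and numbers below are made up; real KataGo uses Monte Carlo tree search guided by a neural network, but the final decision rule is just as mechanical):

```python
# Toy sketch of the procedure described above: score every candidate
# variation by winrate, sort, pick the best. Nothing else happens.

def pick_move(winrates):
    # winrates: dict mapping each candidate move to an estimated winrate
    ranked = sorted(winrates.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0][0]  # the move with the highest winrate

# Hypothetical winrate estimates for three opening moves:
candidates = {"tengen": 0.48, "san-san": 0.51, "hoshi": 0.50}
print(pick_move(candidates))  # san-san
```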

3 Likes

I think the main thing you’re getting “wrong” is that intelligence is not a binary quantity. Nobody knows exactly what it is, of course, or what is the best way to model it (although some people claim to know), but as far as I know there’s quite a lot of agreement, both in the fields of philosophy and AI research, that a good description for intelligence is

Intelligence: the ability to find solutions to a problem.

Which is a spectrum variable rather than a binary one, and a highly contextual one. But strong Neural Networks are generally agreed upon to possess intelligence, just not consciousness (probably) or often knowledge about context.

What I think a lot of people, including AI researchers, might agree with, is that GPT-3 lacks abstraction and it lacks knowledge in some sense – although it’s always tricky to use these descriptors since we’re talking about an entity that’s so radically different from humans.

I think that the main reason GPT-3 behaviour looks so weird to us might be that we actually have a lot of hidden assumptions about our OWN behaviour.

Like you say, GPT-3 is mimicking a specific part of human behaviour, but whereas we humans produce that behaviour starting with general intelligence and knowledge about the world, GPT-3 basically has to guess the context.
Using a Go analogy, it might be a little bit like trying to gauge the strength of a Go player by just looking at one of their games without knowing anything about their intentions or the context in which the game was played (but vastly more complex). Or, to be more accurate to what GPT-3 does, trying to generate such a game by just training the network on sgfs without context.

To be fair, I think trying to get close to general intelligence was actually part of the hopes in the GPT-3 project, so it arguably failed miserably in that regard.

3 Likes

There is intuition.
The problem is that there is nothing to intelligence except intuition.

Intuition is when you feel that tengen or san-san are probably good in the current position. Only then do you start deep thinking with just those two candidates instead of with all possible moves.

AlphaGo is able to feel those few candidates in any position. That is artificial intuition. The problem is that the process of choosing between the candidate moves is stupid and mechanical.

1 Like

It can’t guess.
It’s incredibly good at imitation. No guessing or knowledge or understanding. And no will!
It does it because it’s made for doing it. It can’t choose.

To find solutions…

I don’t think KataGo or GPT-3 can find solutions at all.
The people who found solutions were the devs of those programs. They demonstrated a very sharp intelligence. Their products did not.

A piece of code can’t feel.
The neural network is surprising and wonderful but isn’t alive and can’t feel.

1 Like

you can replace that word with “guess”

1 Like

Alright, we might be getting to a part of the discussion where we might never agree:

everything I quoted above is, in my opinion, also true about humans.

As many people “know” and most everyone else stubbornly refuses to accept, humans don’t actually have free will, or rather, the concept of “free will” doesn’t make any sense scientifically (because everything in the universe is a combination of deterministic and stochastic behaviour, and the common intuition of “free will” has the arrogance to imagine it’s neither, yadda yadda yadda – it’s actually old news so I’m not going to talk about it too much, you either agree or you don’t – and if you don’t you’re almost surely wrong).

Almost surely, humans are actually organic robots that generate/exhibit consciousness (whatever that means) and are “programmed” in a way that gives them the illusion of this “free will”, but their behaviour is as deterministic as everything else in the physical universe. I think the only way for this to not be true is to resort to some kind of “spiritualistic” theory, like postulating the existence of a “soul” that somehow “controls” the human body and doesn’t follow the laws of physics as we currently understand them. So you know, it could be, it just seems very very unlikely.

So humans don’t have a “source code” as usually intended in computer science, but they do have an “algorithm” based on physics, with an input and an essentially deterministic output (the “choice” of behaviour).

To be clear, while “free will” doesn’t exist, the phenomenon of “choice” does. Humans do choose things… the same way that any algorithm does. You give a very basic chess-playing bot an input, the algorithm will run through the options of the legal moves and choose one according to the algorithm. It’s a deterministic choice, but it’s useful to be able to describe it as a choice. Just like it’s useful to talk about evolution as if it was consciously making living beings evolve in a certain way with the intention of making them survive. It’s too complicated to talk about reality without using these words, so we just use these words.
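The “deterministic choice” of a basic game-playing bot can be sketched in a few lines (the moves and scores here are invented for illustration): same input, same evaluation, same “choice”, every time.

```python
# A deliberately trivial "bot": given the same input it always makes
# the same "choice". Choosing is just a deterministic mapping from
# input to output.

def choose(legal_moves, evaluate):
    # Run through the options and pick the best according to the algorithm.
    return max(legal_moves, key=evaluate)

# Made-up evaluations of four legal opening moves:
scores = {"e4": 0.30, "d4": 0.25, "Nf3": 0.20, "a3": -0.10}
move = choose(sorted(scores), scores.get)  # sorted() fixes the tie-break order
print(move)  # e4
```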

“Feeling” is something that depends on consciousness, because it’s something you “experience”. As far as we know, no piece of code so far has achieved consciousness, but some day they might, and that day they might also develop something similar to “feelings”.

Also, there’s the next part which is more of my own original idea:

“Understanding” is just an illusory feeling that humans get when they have a very good heuristic about something they’re very familiar with.

Anyone who’s studied logic knows that “it’s turtles all the way down”, i.e. you can’t actually prove anything to be true unless you start from some arbitrary point where you say “alright, this is true and these rules for deduction are also true, let’s go from here”; an analogy to how it works in practice is that there are some ideas about which humans just go “this must be true, there’s no way it isn’t true” (though they often don’t agree with each other about it).

For example, if I say, “If it’s raining then I deduce it’s raining”, you’ll say “obviously the deduction is true”. If I say “If it’s raining then I deduce it’s not raining”, you’ll say “obviously the deduction is false”. And you probably think you understand what you’re talking about.

But you’re basing that sense of what’s “obviously true” on your experience of how the world works, and of how the phenomenon of “things being true” works. In other words, your sense of understanding what’s obviously true is just a heuristic. It might be a very good or a very bad heuristic depending on what experience you have, but it’s still just a heuristic, and as anyone who’s studied, say, mathematical set theory knows, that heuristic is actually bad more often than we think.

There may or may not be an “actual objective truth” in the universe, but the best humans can do is build a heuristic mental model for what it is. And most of the time, when a human feels like they “understand” something, they’re actually stupendously wrong.

So, yes, I agree that machines don’t have “understanding”, because “understanding” is a feeling/experience, and like all feelings/experiences it requires consciousness.

But since understanding is an illusion (in my opinion), then I don’t actually think it matters much at all.


This is why I find AI research so fascinating. It’s an awesome intersection of Mathematics, Computer Science and Philosophy, and it forces us to challenge our assumptions and intuitions about what “intelligence” or “consciousness” or “feelings” or “understanding” mean, much more concretely than philosophy already did for centuries.

4 Likes

I’ll just drop this here because it amused me…

(these were actually my first attempts at these prompts - I guess if I tried multiple times I would get different results so I’m not going to tempt fate :wink:)

19 Likes

In go, is it allowed to repeat a board position that occurred several turns ago?

In go, it is not allowed to repeat a board position that occurred several turns ago. This rule, known as the “KO rule,” is designed to prevent endless loops in the game. According to the KO rule, if a player makes a move that would cause the game to revert to a previous board position, the move is not allowed and the player must choose a different move. This rule is an important part of go strategy and helps to ensure that the game progresses in a fair and interesting way.

There are multiple versions of the ko rule, could you list them and explain the differences?

There are several different versions of the ko rule in go, and the specific version that is used can vary depending on the context in which the game is being played. The three most common versions of the ko rule are the “simple ko rule,” the “superko rule,” and the “bronstein ko rule.”

The simple ko rule, also known as the “positional superko rule,” is the most basic version of the ko rule. According to the simple ko rule, a player is not allowed to make a move that would cause the game to revert to the exact same board position that occurred earlier in the game. This rule prevents players from repeating positions, but it does not prevent players from making moves that would cause the game to revert to a similar, but not identical, position.

The superko rule, also known as the “absolute superko rule,” is a more strict version of the ko rule. According to the superko rule, a player is not allowed to make any move that would cause the game to revert to any previous board position, even if the position is not identical to the earlier position. This rule prevents players from repeating any position that has occurred earlier in the game, and it is designed to prevent players from using the ko rule to their advantage.

The bronstein ko rule is a hybrid of the simple ko rule and the superko rule. According to the bronstein ko rule, a player is not allowed to make a move that would cause the game to revert to the exact same board position that occurred earlier in the game. However, if the player passes instead of making the prohibited move, the opponent must also pass on their next turn. This rule allows players to avoid repeating positions by passing, but it also prevents players from using the ko rule to their advantage by requiring them to give up their turn if they are unable to make a different move.

5 Likes

This confuses me so much, I need fact-checking XD

Is the Bronstein ko rule a thing? Or did it make it up? I couldn’t find it anywhere.

Also, it’s very “what the hell”-worthy that it equated the simple ko rule to the positional superko rule, although maybe it’s because there actually are people out there (and in its training dataset) that think it is…? After all, the simple ko rule is actually more convoluted to explain than the positional superko, so maybe people just resort to that instead.

1 Like

Personally I find it all kinda scary.

2 Likes

It’s news to me at least… :stuck_out_tongue:

3 Likes

Give a proof that there is a finite number of primes.

The statement that there are a finite number of primes is known as the “prime number theorem.” It can be proven using a number of different methods, but one simple way is to use the concept of “infinite descent.”

Suppose, for the sake of contradiction, that there are an infinite number of primes. This means that we can list all the primes as follows:

p1, p2, p3, p4, p5, …

Now, consider the number N, defined as follows:

N = p1 * p2 * p3 * p4 * …

Notice that N is a composite number, since it is the product of an infinite number of primes. Now, let p be any prime number that is not one of the primes listed above. (Such a prime must exist, since there are an infinite number of primes, and we only listed a finite number of them.)

Since p is not one of the primes listed above, it must be a factor of N. In other words, we can write N as follows:

N = p * X

where X is some other integer. Notice that X is composite, since it is the product of all the primes listed above, divided by p. But since X is composite, it must have a prime factor q. This means that we can write X as follows:

X = q * Y

where Y is some other integer. Substituting this into the equation above, we get the following:

N = p * (q * Y)

This means that N is the product of three primes: p, q, and Y. But this contradicts the fact that N is composite, since we assumed that N is the product of an infinite number of primes. Therefore, our initial assumption that there are an infinite number of primes must be false, and there must be a finite number of primes.

At first glance I thought it just took a proof that there is an infinite number of primes, swapped out “infinite” for “finite” and called it a day. On closer inspection, it is wrong in much more impressive ways :laughing:

15 Likes

The GPT-generated ko rule text is flawed in several places.

I’ve never heard of the “Bronstein ko rule” and could not find anything mentioning it.

The name does appear in the concept of “Bronstein timing”, but that is just a time control system that is unrelated to ko rules.

3 Likes

Reading this is basically the same experience as reading a mathematics page about a topic you have no familiarity with, so… I guess fair enough? If someone asked me to write an essay capturing the “Gestalt” of studying mathematics, I’m not sure I could do a much better job :sweat_smile:

Amongst others:

  • it says that the “simple ko rule” is also called “positional superko rule” when those two are pretty much antipodal
  • from what I understand of its fictional Bronstein ko rule, if there is a ko and I pass, then my opponent is forced to pass too. This one is hilarious. I guess we score the game immediately?
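For reference, the distinction the GPT answer muddles can be sketched like this, with whole-board positions abstracted to plain hashable snapshots (my paraphrase of the rules, not any official wording): simple ko only forbids a move that recreates the position from immediately before the opponent’s last move, while positional superko forbids recreating any earlier position.

```python
# Simple ko vs. positional superko. `history` is the list of positions
# after each move played so far; `new_position` is the position the
# proposed move would create.

def violates_simple_ko(new_position, history):
    # Only an immediate recapture (recreating the position from just
    # before the opponent's last move) is forbidden.
    return len(history) >= 2 and new_position == history[-2]

def violates_positional_superko(new_position, history):
    # Recreating ANY earlier whole-board position is forbidden.
    return new_position in history

history = ["A", "B", "C", "D"]
print(violates_simple_ko("C", history))           # True
print(violates_positional_superko("A", history))  # True
print(violates_simple_ko("A", history))           # False
```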

@Lys I guess we’re just going to have to disagree about the meanings of “understanding”, “knowledge” and “intuition”. To me, these concepts can easily be extended to non-alive things.

5 Likes

You could try to look for an interpretation such as “in practice, it means no player can pass to avoid the ko”. Since I’m a weak player, I was giving it the benefit of the doubt and imagining there might be some galaxy-brain tactical advantage to passing, but I guess I was overthinking it XD

write svg image file with style information

[image: gpt]


write svg image file with style information of AlphaGo Go game

[image: gpt2]

6 Likes

But umm… what if a player passes instead of making a -1 point move? Or even worse, passes instead of playing any normal move in a game that’s not even close to being finished?

A forced pass sounds like a horrible idea xD

2 Likes

But its linguistic prowess has also underlined a rather major problem: ChatGPT could bring about the death of the traditional essay.

What problem? All hail ChatGPT :joy:

1 Like