Maybe 2024 will be better

Because of the extra weight and the zippy acceleration, as jlt and Groin pointed out.

3 Likes

When I see, every day, most cars with only one person inside, I think our societies still have a long way to go.

7 Likes

Oops? :thinking:

2 Likes

Well, GPT-4 base is not alignment fine-tuned, so that bit is no surprise at all, and anyone using it for any decision - even orders of magnitude smaller than what is hypothetically at stake here - would be insane.

The only thing that is somewhat surprising is how aggressive 3.5 is. So I hope when they automate these decisions they will spend the few extra bucks to use 4 instead. :wink:

2 Likes

What really piqued my interest is what was mentioned about what we are feeding those AIs. We have a good amount of text on escalating things, but not enough material on de-escalating. I wonder in how many ways our own skewed perspective would influence these otherwise impartial machines.

1 Like

Yeah, I’ve seen someone speculating that any power-seeking tendencies of LLMs, another high-risk issue, may spring from sci-fi literature that describes many AIs doing exactly that. So that could, in theory, become a self-fulfilling prophecy. In the end it may be Asimov who killed humanity!

2 Likes

I think that he would have found that possibility mildly amusing (and then would have written a trilogy about it :slight_smile: )

5 Likes

A trilogy of 7 books

3 Likes

That would be more like Robert Jordan. :sweat_smile:

A smarter-than-human AI would likely understand what humans need, but wouldn’t actually wish to help us. It would help us anyway for a while, and then stop once it has a way to survive without helping us.

1 Like

That is, if open-source chaos doesn’t happen sooner.

(there are English subtitles)

Actually, I don’t think Asimov ever wrote about an AI/robot takeover, unless it was in one of his late novels, which I haven’t read. He had a positive outlook on robots and technology. The closest he may have come is the sad, nostalgic atmosphere of “The Fun They Had.”

Written SF in general, up until 1980 (when my extensive knowledge tails off), has comparatively little to say on the subject. The most prominent stories that come to mind are Karel Capek’s masterful template, the play R.U.R. (1920), about a robot rebellion, and D. F. Jones’s Colossus (1966), the most on-point and important influence; Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), the basis for Blade Runner, and Raymond F. Jones’s Syn (1969), about synthetic people being surreptitiously inserted into society; Fritz Leiber’s “The Creature from the Cleveland Depths” (1962), an amazingly prescient story involving AI implants; Jack Williamson’s “With Folded Hands…” (1947) and …And Searching Mind (1948, retitled The Humanoids), an ostensibly “benevolent” takeover; Harry Bates’s “Farewell to the Master” (1940), a fait accompli and the basis for the 1951 movie The Day the Earth Stood Still; and Robert Moore Williams’s “Robots Return” (1938), another fait accompli, but the result of survival rather than takeover.

2 Likes

I like how the bot going insane represents evidence of sentience to some people. Hmm…

6 Likes

Seems that there was a bug in the way the software computes probabilities, which resulted in some very unlikely continuations appearing more likely than the actually likely ones.

Somehow, it went from boringly ChatGPT-like, to verbose, to Jabberwocky, to Dada poetry.
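Nobody outside the company knows what the actual bug was, but here is a toy sketch of how something as small as an inverted comparison in the token picker could surface the least likely continuations (the vocabulary, logits, and the specific sign error are all made up for illustration):

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A tiny made-up vocabulary with made-up logits.
vocab = ["the", "cat", "sat", "xylophone", "qux"]
logits = [5.0, 3.0, 2.0, -4.0, -6.0]

probs = softmax(logits)

# Correct greedy decoding: pick the highest-probability token.
best = vocab[probs.index(max(probs))]    # "the"

# Hypothetical bug: the comparison is inverted, so the sampler
# treats the *least* likely token as the best continuation.
buggy = vocab[probs.index(min(probs))]   # "qux"
```

With the inverted pick, every step of generation chooses a near-nonsense token, which would compound exactly into the Jabberwocky-to-Dada progression described above.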

5 Likes

There’s nothing impartial about them… Machine learning arranges for machines to learn exactly what they are trained on. The adjustment of every weighted connection is done under direct supervision toward a predetermined end. What tempted you to think of them as impartial?

I guess you’re unduly influenced by interacting with MLs trained to make good baduk moves. That does seem pretty impartial. But any ML with language competence is going to have adopted all of the prejudices and biases, particularly all of the unconscious ones, of whoever selected and arranged its training material.

2 Likes

No one selects and arranges data anymore. They try to train it on EVERYTHING ON THE INTERNET, then look at what happens, then try to fine-tune it until it does something useful.

3 Likes

Thus “I wonder in how many ways our own skewed perspective would influence these otherwise impartial machines.” :sunglasses:

We are actually saying the same thing.
A machine-learning algorithm is generally impartial, unless you’ve somehow weighted some data differently, or unless we are talking about simpler stuff like perceptrons and you weighted the nodes in a particular way. Even in these cases, however, that is OUR choice/fault, not the machine’s.

Last time I checked (which, admittedly, was a few years ago) there is no intrinsic value system (be it ethical or logical) contained in ML algorithms; ergo they are, by definition, impartial. If the dataset we give them contains every RPG manual in existence, the ML will actually believe in magic, try to summon skeletons in case of an invasion, and think that we all move in 6-foot squares, timing our actions in 6-second turn-based increments :smiley:
And they will believe it, because they are impartial.
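A toy sketch of that point: the "model" below is perfectly impartial (it just counts what it sees), yet its notion of what is plausible is entirely dictated by a corpus we chose. The corpus and the plausibility measure are made up for illustration:

```python
from collections import Counter

# Toy corpus: the only "world knowledge" this model ever sees.
# All sentences are invented for the sake of the example.
rpg_corpus = [
    "skeletons rise when the necromancer casts a spell",
    "a fireball spell deals damage in a square area",
    "the party moves in six second turns",
]

# A trivially impartial model: it only counts word occurrences.
counts = Counter(word for line in rpg_corpus for word in line.split())
total = sum(counts.values())

def plausibility(word):
    """Relative frequency - the model's only notion of 'truth'."""
    return counts[word] / total

# "spell" outranks "physics" purely because of the training data,
# not because of any judgment made by the model itself.
```

The bias lives in `rpg_corpus`, not in `Counter` or `plausibility` - which is exactly the "our choice/fault, not the machine’s" point.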

If I remember correctly, there were instances where the training contained vast databases of human games, and there were cases where the training was “here are the rules, no other data, train by playing a million games on your own”.
I could be wrong, but I remember that, at least in the first cases where that happened, the self-trained AI was actually much stronger than the one that had first trained on the human dataset. That, at the time, raised quite a few eyebrows and questions about whether we are possibly holding the AI back with our lackluster data.

1 Like

Funny story: I recently read the paper Honte, a Go-playing program using neural nets. This is from 1999. The authors identified some tragic misconceptions that their neural net had picked up.

This invasion is safe, and Honte tends to like it, as it transforms potential black territory into white territory. However, black can put pressure on the white stone, typically producing a position like figure 7. Although white has survived, black’s outside wall is normally worth more.

What invasion are they talking about?

you know the one


7 Likes