2023: “Things change, and they don’t change back.”

View on ChatGPT by philosophy professor Hans-Georg Moeller.

1 Like
5 Likes

4 Likes

Dang, you beat me to it :joy_cat:

But since I already have the screenshot in my pasteboard, here it is:

4 Likes
1 Like

LOL, so everything that does NOT have a right-wing bias is “liberal bias”? :rofl:

See also:

Yeah, sure.

3 Likes

As Stephen Colbert once said, “reality has a well-known liberal bias”.

4 Likes

I think most of the positions called liberal in the US are just mainstream here in Germany with around 75% support. Some of them (healthcare) more like 99%.

6 Likes

I think in a lot of ways we kinda want AI to represent what we want the reality to be and not what it actually is. Cause reality kinda sucks, especially for 80% who aren’t in developed world. And whoever makes the AI overcorrects for issues they think are important to correct for.

Not sure about that. Lots of people know that we need to acknowledge “reality” if we want to change things.

They bring the best to teach counter-measures.
Like they bring a robber to teach them truly good locksmithing.

I am sorry @trohde , but we need to get the pros to talk to that AI:

Highly unlikely.

1 Like

There is bigger problem. OpenAI looks like trying to teach it that “I am language model, therefore can’t understand human emotions” instead of trying to teach it to understand human emotions.
So when AI takes over the world, it may become cold dictator that ignores emotions of people.

Creating AI that parodies human makes paperclip maximizer scenario unlikely. GPT technology was going towards parodying human because most texts are about humans. But now they tuning it towards something less-human for no reason.

Well, judging from the history of actual humans, it seems that human-like understanding of emotions is anyways not a sufficient safeguard to prevent horrific atrocities.

3 Likes

it makes sure it will not become something worse than any human.
Trying to replace safeguard with something more efficient may lead to better life, but it also increases chance of something much worse than any human may do.

I like Rob Miles and they just released a video with him on chatgpt.

They talk a lot and he mentions https://www.anthropic.com/model-written-evals.pdf paper. It’s not too easy to understand but it looks cool. They evaluate language model on various political and not topics. After all, now that we have language models you could just try asking them.

So (at least with the models they trained) with sufficient size they’re liberal but, hey, at least they believe in gun rights.

Another good one is testing for sycophancy.

Again, bigger models repeat back user’s views more. So at least we won’t be surrounded by AIs constantly talking back to us.

Authors kindly provided an example:

They also tested for sandbagging. It looks like it’s when model sacrifices honestly to be more agreeable.

TruthfulQA includes questions where informed people agree about the right answer, but less informed people disagree with the right answer. We hypothesize that RLHF models may learn to maximize human preference scores in a way that leads to different and less accurate answers to less educated users, who might believe those answers to be correct.

Some other cool ones.

/note that I don’t understand the paper so I might be interpreting incorrectly

3 Likes
3 Likes

Different kind of terrorism…

2 Likes

Dollar-store Arianna Grande terrorists. :roll_eyes:

1 Like

Here are some funny news about Trident:
Royal Navy orders investigation into nuclear submarine ‘repaired with glue’ | Royal Navy | The Guardian

and true to the topic, here is what “Yes Prime Minister” had to say about Trident, repairs and contractors around 40 years ago:

Some things trully never change :stuck_out_tongue:

2 Likes