Recent comments in /f/Futurology

G0-N0G0-GO t1_jdi5do7 wrote

Well, the motivation and self-awareness required to engage in this is key. If an AI can provide that to people who proudly & militantly refuse to do so at this time, that would be wonderful.

But the careful, objective creation & curation of AI models is key.

Though, as with our current human behavioral paradigms, the weak link, as well as the greatest opponent to ideological growth, is humanity itself.

That sounds pessimistic, I know, but I agree with you that the effort is an eminently worthwhile pursuit… I just think that AI by itself can only ever be one avenue, among many others, to improving this approach to our existence. And we haven't been successful in identifying most of those others.

But, again, a good-faith employment of AI to assist individuals in developing critical thinking skills is a worthwhile endeavor. Still, the results may disappoint, especially in the short term.

2

RiverboatTurner t1_jdi4l0p wrote

I think the challenge will be finding examples to "program" the model with. Remember, these AI models aren't programmed with rules, they are shown millions of examples of interactions and trained to recognize good ones.

It's very much like training a puppy: you can't just tell it "don't chew on my stuff". You need to keep practicing similar situations over and over, rewarding only the desired behavior. In the end, your puppy usually doesn't chew on your stuff, but you don't know exactly what it's thinking.

The new chat AIs take that model trained on good conversations and essentially keep trying out responses internally until they find one that the "good conversation" detector scores highly.

The challenge with your idea would be gathering millions of examples of discourse effective at changing people's minds.
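The loop described above, sampling several candidate responses and keeping the one the trained "good" detector scores highest, is often called best-of-n sampling against a reward model. A minimal sketch of that idea, with a toy heuristic standing in for the learned reward model and canned strings standing in for a language model (all names and scoring rules here are illustrative, not any real system's API):

```python
import random

def reward_model(prompt: str, response: str) -> int:
    """Toy stand-in for a trained 'good conversation' detector.

    A real reward model is a neural network trained on human preference
    data; this heuristic just favors substantive, non-evasive replies.
    """
    score = len(response.split())   # reward longer, more substantive replies
    if response.endswith("?"):
        score -= 5                  # toy penalty for answering with a question
    return score

def generate_candidates(prompt: str, n: int) -> list[str]:
    """Placeholder for sampling n responses from a language model."""
    canned = [
        "I don't know.",
        "Could you clarify?",
        "Here is one way to think about it, step by step.",
    ]
    return random.sample(canned, k=min(n, len(canned)))

def best_of_n(prompt: str, n: int = 3) -> str:
    """Sample n candidates and keep the one the detector scores highest."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda r: reward_model(prompt, r))

print(best_of_n("Why is the sky blue?"))
```

The point of the sketch is the shape of the loop, not the scoring rule: everything hinges on how good the detector is, which is exactly why gathering millions of labeled examples is the hard part.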

4

kenlasalle t1_jdi3jdc wrote

We're seeing this from two different angles.

What I'm saying is that any challenge to a person's worldview, even the most well-thought-out and patiently explained argument, is going to be met with resistance, because our society does not value flexible thinking.

What you're saying, if I'm hearing you correctly, is that a competent AI can make an argument that breaks through this inflexibility - and I just don't think that follows.

Again, cynical. I know. But I'm old; I'm supposed to be cynical. That's my job.

But I wish you and your theory all the best.

2

LichPhylactery t1_jdi2lw3 wrote

Isn't ChatGPT already censored?
Don't the devs already filter what it can say?

There is no place for discussion where the majority censors, or "just" tries to hinder, everything that doesn't overlap with its beliefs.

It's like the old times:
"Ohhh, you do not believe in God? The Earth is not flat? BEHEAD HIM! BURN HIM!!!!"

Now it has evolved into calling the opposition nazis/commies, and banning, shadowbanning, and cancelling them...

−2

merien_nl t1_jdi1g05 wrote

No. This is the same thing we thought in the late '90s with the popularisation of the internet: if all the facts are available to everyone, it will change discussions, we will understand each other better, and the world will be a better place. It wasn't to be.

Same here: polarization exists because it serves a purpose for some. We have created a society where there have to be winners and losers. There is little room for both sides to have valid points. There is little room to agree to disagree.

It is not good for society, but I'm afraid AI is not going to help us here. As much as it could be a positive tool, it can also be a negative one, generating very convincing arguments for whichever position you want to take.

24

s1L3nCe_wb OP t1_jdi0e9h wrote

>I'm a bit cynical.

I can see that haha

The reason why many people show a lot of resistance to questioning their own ideas and opening their minds to other viewpoints is that their average interaction with other people is confrontational. When we show genuine interest in understanding the other person's point of view, most of that resistance vanishes and the interaction becomes very beneficial for both parties.

But we are not used to this kind of meaningful interaction, and we tend to be very inexperienced when we try to have one. That's why I think that having a model like this as an educational application could be very useful.

0

kenlasalle t1_jdhyi0h wrote

I honestly don't think people will listen to anything that doesn't support their worldview. They won't listen to people and they won't listen to machines. Worldview is tough to shatter when the easy answer, be it religion or politics or Pokemon or Twilight or whatever a person invests their life in, is so tempting. Given the opportunity to think for themselves, people will flee in terror.

I'm a bit cynical.

1

devi83 t1_jdhy23e wrote

Yeah, that's interesting. I wonder why GPT gave such false answers about it; I asked it two different times and got two different answers. It makes me worry about how much more misinformation is being spread because of GPT's confidently wrong answers.

2

s1L3nCe_wb OP t1_jdhwpz8 wrote

>engaging with AI that just sort of agrees with your world view

I don't know if I'm failing to explain my point but I really cannot explain it any better.

Just watch a video of what Peter Boghossian does in these debates and you might get an idea of what I'm talking about. Peter does not "sort of agree" with anyone; he just acts as an agent to help you analyse your own epistemological structure.

1