Recent comments in /f/Futurology
Ialreadylove_you t1_jdi65oo wrote
Reply to comment by RRoyale57 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
Near future
G0-N0G0-GO t1_jdi5do7 wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Well, the motivation and self-awareness required to engage in this is key. If an AI can provide that to people who proudly & militantly refuse to do so at this time, that would be wonderful.
But the careful, objective creation & curation of AI models is key.
Though, as with our current human behavioral paradigms, the weak link, as well as the greatest opponent to ideological growth, is humanity itself.
That sounds pessimistic, I know, but I agree with you that the effort is an eminently worthwhile pursuit…I just think that AI by itself can only ever be a singular avenue to improving this approach to our existence, among many others. And we haven’t been successful in identifying most of those.
But, again, a good-faith employment of AI to assist individuals in developing critical thinking skills is a worthwhile endeavor. Still, the results may disappoint, especially in the short term.
s1L3nCe_wb OP t1_jdi522j wrote
Reply to comment by RiverboatTurner in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Yep, that's exactly what I think also.
If something like this is ever developed, I would love to see a documentary about its development 🤓
RiverboatTurner t1_jdi4l0p wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
I think the challenge will be finding examples to "program" the model with. Remember, these AI models aren't programmed with rules, they are shown millions of examples of interactions and trained to recognize good ones.
It's very much like training a puppy: you can't just tell it "don't chew on my stuff". You need to keep practicing similar situations over and over, rewarding only the desired behavior. In the end, your puppy usually doesn't chew on your stuff, but you don't know exactly what it's thinking.
The new chat AIs take that model trained on good conversations and essentially keep trying out responses internally until they find one that the "good" detector approves of.
The challenge with your idea would be gathering millions of examples of discourse effective at changing people's minds.
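To make the "good detector" idea concrete, here's a minimal best-of-n sampling sketch. The function names and the scoring logic are placeholders I made up for illustration, not how ChatGPT is actually implemented; in practice the reward model is trained from human ratings (RLHF) rather than called in a literal loop like this:

```python
import random

def reward_model(prompt: str, response: str) -> float:
    # Placeholder: a real reward model is a neural net trained on many
    # human-rated conversations, scoring how "good" a response looks.
    return random.random()

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Placeholder for sampling n different responses from the language model.
    return [f"candidate reply {i} to: {prompt}" for i in range(n)]

def best_of_n(prompt: str, n: int = 8) -> str:
    # Try several responses and keep whichever one the "good detector"
    # scores highest.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda r: reward_model(prompt, r))

print(best_of_n("How do I get my puppy to stop chewing my stuff?"))
```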
s1L3nCe_wb OP t1_jdi40jf wrote
Reply to comment by kenlasalle in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Hahaha yeah, that is a good summary.
Thank you for sharing your views! Have a good weekend 🙏
snk7111 t1_jdi3nnt wrote
I am naive here, but why does ChatGPT need such plugins? Can't it do all these things on its own, particularly on the open web? ELI5 me please.
SpinCharm t1_jdi3lf2 wrote
I tried reading the overwhelmingly long article but after 10 minutes gave up trying to find out where I can actually try it out. They kept saying "with ChatGPT and the Wolfram plugin". Anyone know how to try this?
kenlasalle t1_jdi3jdc wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
We're seeing this from two different angles.
What I'm saying is that any challenge to a person's worldview, even the most well-thought-out and patiently explained argument, is going to be met with resistance because our society does not value flexible thinking.
What you're saying, if I'm hearing you correctly, is that a competent AI can make an argument that breaks through this inflexibility - and I just don't think that follows.
Again, cynical. I know. But I'm old; I'm supposed to be cynical. That's my job.
But I wish you and your theory all the best.
s1L3nCe_wb OP t1_jdi30x4 wrote
Reply to comment by kenlasalle in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
That's precisely why epistemological self-analysis is essential for growth and human evolution in general. I'm quite certain a sophisticated AI model could help us get there faster.
LichPhylactery t1_jdi2lw3 wrote
Isn't ChatGPT already censored?
Don't the devs already filter what it can say?
There is no place for discussion where the majority censor or "just" try to hinder everything that doesn't overlap with their beliefs.
It's like the old times:
"Ohhh, you do not believe in God? The Earth is not flat? BEHEAD HIM! BURN HIM!!!!"
Now it has evolved into calling the opposition nazis/commies. Banning, shadowbanning, cancelling them....
[deleted] t1_jdi2gck wrote
kenlasalle t1_jdi2ajb wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
And yet, they lie at the heart of many of our misunderstandings all the same.
[deleted] t1_jdi1uq5 wrote
[removed]
s1L3nCe_wb OP t1_jdi1u67 wrote
Reply to comment by kenlasalle in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Well, the kind of subjects that I was thinking about are more pragmatic in terms of social interactions. I think that the themes you used as examples can be very interesting but they are not very practical for our day to day interactions.
merien_nl t1_jdi1g05 wrote
No. This is the same thing we thought in the late '90s with the popularisation of the internet: if all the facts are available to everyone, it will change discussions, we will understand each other better, the world will be a better place. It wasn't to be.
Same here. Polarization exists because it serves a purpose for some. We have created a society where there have to be winners and losers. There is little room for both sides to have valid points. There is little room to agree to disagree.
It is not good for society, but I'm afraid AI is not going to help us here. As much as it could be a positive tool, it can also be a negative one, generating very convincing arguments for whichever position you want to take.
Semifreak t1_jdi167y wrote
Reply to comment by mrx-ai in Could GNNs be the future of AI? by mrx-ai
Thank you kindly. I now understand diffusion models.
kenlasalle t1_jdi0z0w wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
I don't buy it. If you talk about things that people believe in firmly - free will, an afterlife, a god - in a non-confrontational manner, they become confrontational. It's not the approach; it's the result.
s1L3nCe_wb OP t1_jdi0e9h wrote
Reply to comment by kenlasalle in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
>I'm a bit cynical.
I can see that haha
The reason why many people show a lot of resistance to questioning their own ideas and opening their minds to other viewpoints is that their average interaction with other people is confrontational. When we show genuine interest in understanding the other person's point of view, most of that resistance vanishes and the interaction becomes very beneficial for both parties.
But we are not used to this kind of meaningful interaction, and we tend to be very inexperienced when we try to have it. That's why I think that having a model like this as an educational application could be very useful.
kenlasalle t1_jdhyi0h wrote
I honestly don't think people will listen to anything that doesn't support their worldview. They won't listen to people and they won't listen to machines. Worldview is tough to shatter when the easy answer, be it religion or politics or Pokemon or Twilight or whatever a person invests their life in, is so tempting. Given the opportunity to think for themselves, people will flee in terror.
I'm a bit cynical.
devi83 t1_jdhy23e wrote
Reply to comment by theglandcanyon in Did Isaac Asimov predict GPT-4? by theglandcanyon
Yeah, that's interesting. I wonder why GPT gave such false answers about it; I asked it two different times and got two different answers. It makes me worry about how much more misinformation is being spread because of GPT's confidently wrong answers.
34twgrevwerg t1_jdhxc1i wrote
No, most people are stupid. AI will lead to more violence.
s1L3nCe_wb OP t1_jdhwpz8 wrote
Reply to comment by resdaz in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
>engaging with AI that just sort of agrees with your world view
I don't know if I'm failing to explain my point but I really cannot explain it any better.
Just watch a video of what Peter Boghossian does in these debates and you might get an idea of what I'm talking about. Peter does not "sort of agree" with anyone; he just acts as an agent to help you analyse your own epistemological structure.
s1L3nCe_wb OP t1_jdhw9u6 wrote
Reply to comment by LaRanch in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Sadly, I have to agree with you on that one. My hope is that there are always people who are willing to invest in things that bring positive change.
[deleted] t1_jdhw6ym wrote
[removed]
s1L3nCe_wb OP t1_jdi67mc wrote
Reply to comment by G0-N0G0-GO in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Great input. Thanks for sharing it 🙏