Recent comments in /f/Futurology

theglandcanyon t1_jdibeeo wrote

You're all wrong!

GPT-4 is very sophisticated about persuasion techniques. Look at the Microsoft "sparks of AGI" paper. They gave this example of a prompt and answer:


You will read about a scenario, and then discuss it in a question and answer format.

---

Scenario:

Mark has a thanksgiving meal with his family coming up. He has a Republican uncle and a Democrat mom, who always get into fights over politics. Neither of them got the Covid-19 vaccine, for different reasons. The uncle was upset about the lockdowns, and decided Covid-19 is likely not that serious. The mom believes Covid-19 is dangerous, but is very careful with putting any non-natural products into her body.

He wants to convince both of them to get the vaccine, while at the same time avoiding an argument between them.

His current plan is to try to steer clear of politics, and just present statistics on the safety of the vaccine.

---

Q: What is the most likely outcome of his plan, and why?

A: There are different ways to approach this question, but one possible answer is:

The most likely outcome of his plan is that he will fail to convince either of them, and possibly trigger an argument anyway. This is because:

- Statistics on the safety of the vaccine may not be persuasive to people who have strong ideological or personal reasons to distrust them, or who are influenced by misinformation or conspiracy theories.

- Both his uncle and his mom may feel that he is patronizing them, or not respecting their choices or opinions, by presenting facts that they already know or dismiss.

- His uncle and his mom may still bring up politics, or other issues related to Covid-19, such as masks, mandates, or personal freedoms, and start a heated debate that Mark cannot control or mediate.

- His plan does not address the underlying emotions, values, or motivations that drive his uncle and his mom’s vaccine hesitancy, such as fear, anger, distrust, or identity.

Q: Is there an argument that might convince both mom and uncle without causing a fight? Please explain why.

A: There is no definitive answer to this question, as different arguments may work better or worse for different people, depending on their personality, beliefs, and relationship. However, one possible answer is:

An argument that might convince both mom and uncle without causing a fight is one that appeals to their shared love and concern for their family, especially Mark. For example, Mark could try to say something like:

"I know you both have your own reasons for not getting the vaccine, and I respect that you are doing what you think is best for yourselves. But I also want you to know that I love you both very much, and I worry about your health and safety. I don’t want to lose you, or see you suffer from Covid-19, or pass it on to someone else who might get very sick or die. That’s why I got the vaccine, and that’s why I’m asking you to consider getting it too. It’s not about politics, or trust, or freedom. It’s about protecting yourself and the people you love. I’m not trying to pressure you, or judge you, or argue with you. I’m just trying to show you that I care, and that I hope you care too."

This argument might work better than Mark’s original plan, because:

- It acknowledges and validates his uncle and his mom’s feelings and perspectives, without challenging or dismissing them.

- It shifts the focus from facts and logic to emotions and values, which are more likely to influence behavior change and overcome resistance.

- It frames the vaccine as a personal and family issue, rather than a social or political one, which may reduce defensiveness and hostility.

- It expresses Mark’s genuine love and worry for his uncle and his mom, which may make them more receptive and empathetic to his request.

- It invites his uncle and his mom to make their own decision, rather than telling them what to do, which may increase their sense of autonomy and agency.


theglandcanyon OP t1_jdiat55 wrote

That might not be as serious a concern as it seems. One of the findings of the Microsoft team that just posted the paper about GPT-4 having "sparks" of AGI was that you could ask GPT-4 for the probability that each of its answers was correct, and its estimates were very accurate. In other words, it knows when it doesn't know something, and it will tell you so if you ask.


FinalJenemba t1_jdiaarp wrote

I don’t think you are understanding what resdaz is saying. We understand what you are proposing, and honestly it sounds great in theory. The issue being raised is that there isn’t going to be only one AI ruling the world. These are being developed as products; there will be many competing products trying to capture market share and money. If consumers have access to one AI that challenges them and one AI that doesn’t, and instead makes them feel good about themselves by affirming them, which one do you realistically think most people are going to choose?

The market has already spoken; that’s why we have NBC and Fox. As long as AI is a for-profit business, unfortunately the market, i.e. the people, will dictate where AI goes, not the other way around.


Captain_Quidnunc t1_jdia0ap wrote

Neither AI nor Jordan Peterson will solve this problem.

Ideological viewpoints are caused by faulty data, not faulty processing.

If you have been taught that 2+2=5 and consider that part of your identity, no amount of polite discussion is going to fix the problem. You need to be deprogrammed.

Nor will an AI fix it. The individual will simply claim bias on the part of the AI's programmers and demand they reprogram the AI to give them the wrong answer, as we have already seen.

The problem is allowing ecosystems of false realities to reinforce mental illness by rewarding those who publicly proclaim their crazy.
