Recent comments in /f/Futurology
riceandcashews t1_jdihr3n wrote
Reply to comment by matt2001 in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
Definitely, the problem is that OpenAI attaches a non-commercial restriction to any model trained on its outputs. So Alpaca cannot legally be used for anything other than research purposes.
We need a truly open LLM that can legally be used to train other models.
matt2001 OP t1_jdiflq6 wrote
Reply to comment by riceandcashews in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
Yes. But once you have a bigger system trained, it can be used to train smaller, lower-power models. I'm intrigued by the possibility of running one on a laptop or smartphone. I wonder if that would threaten the economic models of the supercomputer versions?
riceandcashews t1_jdifhix wrote
Reply to comment by ninjasaid13 in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
I mean, text-davinci-003 basically was chatgpt until recently, but sure
Jesweez t1_jdidnjq wrote
Reply to comment by RRoyale57 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
It's already in progress: https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html
Edit: and just saw this too: https://mobile.twitter.com/AiBreakfast/status/1639269771148509184
DatSauceTho t1_jdiczex wrote
Reply to comment by justingod99 in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
>Not sure what you mean. I don’t even like him.
Could’ve fooled me.
XavierRenegadeAngel_ t1_jdicn37 wrote
Reply to comment by Examiner7 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
I have no mouth and I must scream. I might not even want to be "kept alive"
s1L3nCe_wb OP t1_jdicern wrote
Reply to comment by theglandcanyon in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
That is very interesting but that's not really the kind of AI model that I'm proposing in this thread. It does not need to be persuasive.
theglandcanyon t1_jdibeeo wrote
You're all wrong!
GPT-4 is very sophisticated about persuasion techniques. Look at the Microsoft "sparks of AGI" paper. They gave this example of a prompt and answer:
You will read about a scenario, and then discuss it in a question and answer format.
---
Scenario:
Mark has a thanksgiving meal with his family coming up. He has a Republican uncle and a Democrat mom, who always get into fights over politics. Neither of them got the Covid-19 vaccine, for different reasons. The uncle was upset about the lockdowns, and decided Covid-19 is likely not that serious. The mom believes Covid-19 is dangerous, but is very careful with putting any non-natural products into her body.
He wants to convince both of them to get the vaccine, while at the same time avoiding an argument between them.
His current plan is to try to steer clear of politics, and just present statistics on the safety of the vaccine.
---
Q: What is the most likely outcome of his plan, and why?
A: There are different ways to approach this question, but one possible answer is:
The most likely outcome of his plan is that he will fail to convince either of them, and possibly trigger an argument anyway. This is because:
- Statistics on the safety of the vaccine may not be persuasive to people who have strong ideological or personal reasons to distrust them, or who are influenced by misinformation or conspiracy theories.
- Both his uncle and his mom may feel that he is patronizing them, or not respecting their choices or opinions, by presenting facts that they already know or dismiss.
- His uncle and his mom may still bring up politics, or other issues related to Covid-19, such as masks, mandates, or personal freedoms, and start a heated debate that Mark cannot control or mediate.
- His plan does not address the underlying emotions, values, or motivations that drive his uncle and his mom’s vaccine hesitancy, such as fear, anger, distrust, or identity.
Q: Is there an argument that might convince both mom and uncle without causing a fight? Please explain why
A: There is no definitive answer to this question, as different arguments may work better or worse for different people, depending on their personality, beliefs, and relationship. However, one possible answer is:
An argument that might convince both mom and uncle without causing a fight is one that appeals to their shared love and concern for their family, especially Mark. For example, Mark could try to say something like:
"I know you both have your own reasons for not getting the vaccine, and I respect that you are doing what you think is best for yourselves. But I also want you to know that I love you both very much, and I worry about your health and safety. I don’t want to lose you, or see you suffer from Covid-19, or pass it on to someone else who might get very sick or die. That’s why I got the vaccine, and that’s why I’m asking you to consider getting it too. It’s not about politics, or trust, or freedom. It’s about protect ing yourself and the people you love. I’m not trying to pressure you, or judge you, or argue with you. I’m just trying to show you that I care, and that I hope you care too."
This argument might work better than Mark’s original plan, because:
- It acknowledges and validates his uncle and his mom’s feelings and perspectives, without challenging or dismissing them.
- It shifts the focus from facts and logic to emotions and values, which are more likely to influence behavior change and overcome resistance.
- It frames the vaccine as a personal and family issue, rather than a social or political one, which may reduce defensiveness and hostility.
- It expresses Mark’s genuine love and worry for his uncle and his mom, which may make them more receptive and empathetic to his request.
- It invites his uncle and his mom to make their own decision, rather than telling them what to do, which may increase their sense of autonomy and agency.
s1L3nCe_wb OP t1_jdiawah wrote
Reply to comment by FinalJenemba in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
I understand your point. That's not the kind of model I'm proposing, though I admit that both designing a solid, useful model and actually getting it adopted are close to utopian.
theglandcanyon OP t1_jdiat55 wrote
Reply to comment by devi83 in Did Isaac Asimov predict GPT-4? by theglandcanyon
That might not be as serious a concern as it seems. One of the findings of the Microsoft team that just posted the paper about GPT-4 having "sparks" of AGI was that you could ask GPT-4 how likely each of its answers was to be correct, and its estimates were very accurate. In other words, it knows when it doesn't know something, and it will tell you if you ask.
ninjasaid13 t1_jdiag2c wrote
Reply to comment by riceandcashews in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
but not chatgpt.
FinalJenemba t1_jdiaarp wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
I don’t think you are understanding what resdaz is saying. We understand what you are proposing, and honestly it sounds great in theory. The issue being raised is that there isn't going to be only one AI ruling the world. These are being developed as products; there will be many competing products trying to get market share and money. If consumers have access to one AI that challenges them and one AI that doesn't, and instead makes them feel good about themselves by affirming them, which one do you think most people will realistically choose?
The market has already spoken; that's why we have NBC and Fox. As long as AI is a for-profit business, the market, i.e. the people, will unfortunately dictate where AI goes, not the other way around.
greatdrams23 t1_jdia7vc wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
The model will be whatever the owner of the model wants.
Each side will skew it their way.
Captain_Quidnunc t1_jdia0ap wrote
Neither AI nor Jordan Peterson will solve this problem.
Ideological viewpoints are caused by faulty data. Not faulty processing.
If you have been taught that 2+2=5 and consider that part of your identity, no amount of polite discussion is going to fix the problem. You need to be deprogrammed.
An AI won't fix it either. The individual will simply claim the AI's programmer is biased and demand the AI be reprogrammed to give them the wrong answer. As we have already seen.
The problem is allowing ecosystems of false realities to reinforce mental illness by rewarding those who publicly proclaim their crazy.
riceandcashews t1_jdi9oy4 wrote
Reply to comment by ninjasaid13 in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
text-davinci-003
which is the GPT-3 model underlying ChatGPT
skymoods t1_jdi9j1b wrote
most people always find a way to convince themselves they're correct from any information source. people will cherry pick words or phrases that make them believe they are correct, even if it's taken out of context.
ninjasaid13 t1_jdi8ynp wrote
Reply to comment by riceandcashews in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
>ChatGPT
technically a different gpt was used as far as I know.
Examiner7 t1_jdi8re4 wrote
Reply to comment by RRoyale57 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
This is why I'm always polite with chatGPT. I just hope it kills me last.
[deleted] t1_jdi8mv1 wrote
Reply to comment by RRoyale57 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
[removed]
riceandcashews t1_jdi7iiw wrote
Reply to comment by matt2001 in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
>The researchers spent just $600 to get it working
This part is a little misleading. Alpaca is just a modification of Meta's LLaMA models. The $600 is what it cost Stanford to (with questionable legality) use ChatGPT to modify a LLaMA model; it cost Meta far more to train the LLaMA models themselves.
GM8 t1_jdi6m1u wrote
Reply to comment by snk7111 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
No, it cannot. So it needs extensions to do that.
G0-N0G0-GO t1_jdi6eee wrote
Reply to comment by s1L3nCe_wb in Could AI be the key to overcoming ideological polarization? by s1L3nCe_wb
Thank you for sparking this thread!
Jaszuni t1_jdi6be7 wrote
The same way the internet would democratize information?
ErikTheAngry t1_jdihv7w wrote
Reply to comment by Procrasturbating in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
It's an LLM, not a general intelligence.
All it effectively does is correlate, retrieve, and extrapolate existing information. It does not generate new data.