Recent comments in /f/Futurology
deathlydope t1_jdfvbt0 wrote
Reply to comment by Wurm42 in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Reading the article, I'm not seeing a projected timeline or any impossible promises, only earnest excitement over the possibilities two or three papers down the line.
Skyblacker t1_jdfu1ob wrote
Reply to comment by Artanthos in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
I'm still shocked that we got a covid vaccine in less than a year. I thought that sort of thing only happened in movies.
FuturologyBot t1_jdfsg93 wrote
The following submission statement was provided by /u/matt2001:
>Researchers at Stanford University have taken down their short-lived chatbot that harnessed Meta’s LLaMA AI, nicknamed Alpaca AI. The researchers launched Alpaca with a public demo anyone could try last week, but quickly took the model offline thanks to rising costs, safety concerns, and “hallucinations,” which is the word the AI community has settled on for when a chatbot confidently states misinformation, dreaming up a fact that doesn’t exist.
I hope this can be addressed, as it will be able to run on smaller computers.
>Despite its apparent failures, Alpaca has some exciting facets that make the research project interesting. Its low upfront costs are particularly notable. The researchers spent just $600 to get it working, and reportedly ran the AI using low-power machines, including Raspberry Pi computers and even a Pixel 6 smartphone, in contrast to Microsoft’s multimillion-dollar supercomputers.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1204f8t/stanford_researchers_take_down_alpaca_ai_over/jdfnpeo/
[deleted] t1_jdfpv2h wrote
matt2001 OP t1_jdfnpeo wrote
>Researchers at Stanford University have taken down their short-lived chatbot that harnessed Meta’s LLaMA AI, nicknamed Alpaca AI. The researchers launched Alpaca with a public demo anyone could try last week, but quickly took the model offline thanks to rising costs, safety concerns, and “hallucinations,” which is the word the AI community has settled on for when a chatbot confidently states misinformation, dreaming up a fact that doesn’t exist.
I hope this can be addressed, as it will be able to run on smaller computers.
>Despite its apparent failures, Alpaca has some exciting facets that make the research project interesting. Its low upfront costs are particularly notable. The researchers spent just $600 to get it working, and reportedly ran the AI using low-power machines, including Raspberry Pi computers and even a Pixel 6 smartphone, in contrast to Microsoft’s multimillion-dollar supercomputers.
qj-_-tp t1_jdfn85x wrote
Reply to comment by hippymule in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
If you figure out how, let me know.
Important-Ability-56 t1_jdfmiad wrote
I would never put my own name to a revolutionary technology. How embarrassing.
theglandcanyon OP t1_jdfl0w3 wrote
Reply to comment by devi83 in Did Isaac Asimov predict GPT-4? by theglandcanyon
I wonder if there is more than one version of this story? Or more than one story with the same name? The version I'm looking at now bears some resemblance to your summary (Shakespeare is brought to the present and takes a class on himself) but does not have anything about a computer.
Edit: I've now found this story on several websites, and they are all the version I know about, with Shakespeare being brought to the future to take a class on himself, failing it, and being sent back. The end, nothing about a computer of any kind anywhere in it.
Teleseismic_Eyes t1_jdfjz2y wrote
Reply to comment by Borkido in AI creating Games by 2farzzz
AI and WFCA are sometimes used in tandem to build realistic conversations. For example, a trained chat AI can produce a plausible-sounding statement like "The sky is blue. People have blue eyes. Therefore people's eyes are the sky." That's a pretty lackluster line of logic for a trained AI to propose, but a set of probabilistic adjacency rules, even just one that says "human anatomy and weather are mutually exclusive" or something to that effect, can dramatically clean up the output the AI gives. If you know anything about WFCA, this is exactly how the algorithm works: a set of adjacency rules controls the final output based on what already exists (a rough sketch of that filtering idea is below). Hopefully that clears it up a bit.
TL;DR: AI is often very far from perfect. Algorithms like WFCA can help get it quite a bit closer.
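To make the adjacency-rule idea concrete, here is a minimal sketch of rule-based post-filtering. It is purely illustrative: the category tags, the single exclusion rule, and helper names like `violates_rules` are invented for this example and not taken from any real WFCA library.

```python
# Toy post-filter: reject AI-generated sentences that link categories
# an adjacency rule forbids, in the spirit of wave-function-collapse-style
# constraints cleaning up a model's raw output.

# Hypothetical category tags for a few keywords.
CATEGORIES = {
    "sky": "weather",
    "cloud": "weather",
    "eyes": "anatomy",
    "people": "anatomy",
}

# One adjacency rule: anatomy and weather may not be joined in a single claim.
MUTUALLY_EXCLUSIVE = {("anatomy", "weather"), ("weather", "anatomy")}

def categories_in(sentence: str) -> set:
    """Collect the categories of every known keyword in the sentence."""
    words = sentence.lower().replace(".", "").replace("'s", "").split()
    return {CATEGORIES[w] for w in words if w in CATEGORIES}

def violates_rules(sentence: str) -> bool:
    """True if the sentence joins two categories the rules forbid."""
    cats = categories_in(sentence)
    return any((a, b) in MUTUALLY_EXCLUSIVE for a in cats for b in cats)

candidates = [
    "The sky is blue.",
    "People have blue eyes.",
    "Therefore people's eyes are the sky.",  # links anatomy with weather
]

print([s for s in candidates if not violates_rules(s)])
# -> ['The sky is blue.', 'People have blue eyes.']
```

The first two sentences pass because each stays within one category; the third mixes anatomy with weather, so the rule filters it out, which is the kind of cleanup described above.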
Praise_AI_Overlords t1_jdfjpl1 wrote
Reply to Did Isaac Asimov predict GPT-4? by theglandcanyon
>Yes, the story you are referring to is called "All the Troubles of the World" and was written by Isaac Asimov in 1958. While the story doesn't specifically predict the development of GPT-4 or any other specific AI language model, it does explore the concept of a highly advanced computer system that is capable of predicting and managing all of humanity's problems. The story is part of Asimov's "Multivac" series of stories, which feature a supercomputer named Multivac that becomes increasingly powerful and influential over time.
34twgrevwerg t1_jdfjni1 wrote
Reply to comment by filosoful in Women are less likely to buy electric vehicles than men. Here’s what’s holding them back. by filosoful
Women have higher net worths than men in the 35-65 range, which is when most people consider luxury cars. This is because divorce favors women, and men die much younger. So rest assured of parity: while there is a wage gap, women come out ahead in the end and are much more influential in the power structure than is readily apparent.
The safety issue is the only reason I can see. Going to a public charger is a real safety risk; even at high-end malls I've seen some sketchy stuff. Women fear violence from men more than getting into a car crash, for sure.
That fear could be irrational, but many gas stations still have full service. I use it myself at night when I can.
astral_crow t1_jdfhyai wrote
I would say Wolfram Alpha is now reaching its final form. This was always the goal of the system.
Still-WFPB t1_jdfhd4h wrote
Ah wow I was just thinking about this team-up! Neat!
cyankitten t1_jdfcfcj wrote
devi83 t1_jdfc0yd wrote
Reply to comment by theglandcanyon in Did Isaac Asimov predict GPT-4? by theglandcanyon
... Are you sure? In your original post you said:
> There's a certain word in a certain sentence from a certain play (all of which were identified in the story) that he thinks should be different. So he programs a computer to predict the next word from a block of text, then he feeds it all of Shakespeare's work up to the questionable word, and it predicts the word Shakespeare used, not the word the professor thinks he should have used.
And the plot of "The Immortal Bard" literally has that as a plot point:
>After attending the class, Shakespeare admits that he might have made a mistake, but also points out that the professor's interpretation of his work might be wrong. The professor then feeds Shakespeare's entire body of work into a computer and asks it to predict the word in question. The computer agrees with Shakespeare's original choice of words, thus challenging the professor's assumptions about the supposed mistake.
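Incidentally, the "predict the next word from a block of text" idea in that quote can be sketched with a toy bigram model. This is purely illustrative, under the assumption of a made-up mini-corpus and invented names; it is nothing like GPT-4's actual architecture, just the bare concept of predicting a continuation from what an author already wrote.

```python
# Toy next-word predictor: learn which word tends to follow which,
# then predict the most likely continuation of a given word.
from collections import Counter, defaultdict

corpus = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("to"))  # -> 'be': the model echoes what the author actually wrote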
Borkido t1_jdfc038 wrote
Reply to comment by Teleseismic_Eyes in AI creating Games by 2farzzz
OK, I have to admit I'm not exactly well informed when it comes to the inner workings of AI. However, saying that AI is already being used to generate games just because we use WFCA for procedural generation, and AI also uses WFCA, is a bit of a stretch.
dgj212 t1_jdfabbz wrote
Huh. On one hand I think this is extremely cool (I've actually thought about doing this), but on the other hand it might actually worsen food safety and create more deceptive commodities that end up on the market.
For example, take a crab roll or something with crab in it. The filament will taste like crab, the packaging will say it's crab, the filament will even look like processed crab, and the stuff inside will be organic, but instead of crab it's made from bugs with the same flavor profile as crab. Yes, bugs are eaten in many cultures and are closer to carbon-zero than livestock, but the point is that I could see bad actors doing this.
Another thing I could see going wrong is that these companies could do what they're already doing: add addictive additives to the food, encouraging an unhealthy lifestyle. Not to mention drug dealers could start lacing these so that their clients/prey would both be fed and keep coming back for more. That would put more emphasis on going organic... god, I really hope grocery stores don't use that as an excuse to make organic food even more expensive.
On the bright side, if people want to go vegan, this would be a good gateway.
m-s-c-s t1_jdfa2wr wrote
Reply to comment by YawnTractor_1756 in UN climate report: Scientists release 'survival guide' to avert climate disaster by filosoful
> This BS probably works for you with younger dudes, but I've been in thinking capacity since the time it was called 'global warming', and it has always been about "everybody dies unless we stop fossil in ___ years".
Global warming and climate change refer to different things.
By the way, not everybody dies, just far more people than need to. They literally catalogue how many they anticipate in the source you provided.
> TS.C.6.3 Increased heat-related mortality and morbidity are projected globally (very high confidence). Globally, temperature- related mortality is projected to increase under RCP4.5 to RCP8.5, even with adaptation (very high confidence). Tens of thousands of additional deaths are projected under moderate and high global warming scenarios, particularly in north, west and central Africa, with up to year-round exceedance of deadly heat thresholds by 2100 (RCP8.5) (high agreement, robust evidence). In Melbourne, Sydney and Brisbane, urban heat-related excess deaths are projected to increase by about 300 yr-1 (low emission pathway) to 600 yr-1 (high emission pathway) during 2031–2080 relative to 142yr-1 during 1971–2020 (high confidence). In Europe the number of people at high risk of mortality will triple at 3°C compared to 1.5°C warming, in particular in central and southern Europe and urban areas (high confidence). {6.2.2, 7.3.1, 8.4.5, 9.10.2, Figure 9.32, Figure 9.35, 10.4.7, Figure 10.11, 11.3.6, 11.3.6, Table 11.14, 12.3.4, 12.3.8, Figure 12.6, 13.7.1, Figure 13.23, 14.5.6, 15.3.4, 16.5.2}
See what I mean?
> As far as I'm concerned we're already doing it. And we're already on positive trajectory as compared to those RCP scenarios that were extensively used in 90s and 2000s as mainstream scenarios.
Here's the CO2 trendline. Where's the dip we'd see if we were actually taking that action?
Oh, and the 90s and 2000s mainstream scenarios? Here are some examples:
> Schneider’s forecast is considerably more ominous.
> “Six of the warmest years in the last 100 occurred in the ‘80s,” he said recently at a meeting of chemists in Miami. “And I’ll give you odds that the ‘90s will be warmer than the ‘80s.”
When that was published in 1989, we were at 0.27C increase. When the Kyoto Protocol was signed 8 years later in 1997, it was 0.33C.
> "I am a fundamentally optimistic person, but it is getting more and more difficult, because I see the message of science has not fundamentally changed from when I started working in this field, which was 20 years ago," said Thomas Stocker, a professor of climate and environmental physics at the University of Bern in Switzerland.
> Based on two assumptions — that it is not economically feasible for nations to make emissions reductions of more than about 5 percent per year and that increasing carbon dioxide concentrations have a moderate warming effect — he calculates a 2.7 degree Fahrenheit (1.5 degree Celsius) cap on warming, for which island nations vulnerable to rising sea levels have pushed, is already unrealistic. (That cap is often compared to a speed limit for warming; while some consequences — heat waves, species loss and so on — are expected to occur at lesser levels of warming, the repercussions are expected to become more dire as warming increases.)
> Reductions would need to begin by 2027 for the more widely accepted 3.6-degree F (2 degrees C) cap to be achievable, and a 4.5 degree F (2.5 degree C) cap becomes unrealistic after 2040, he calculates.
We were at 0.65C then.
Now, a year after the industrial world shut down to the point that rivers ran clear, it's 0.89C.
[deleted] t1_jdf8vbi wrote
[removed]
MissionDocument6029 t1_jdf7zl8 wrote
Reply to New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Creepy, in that the next step is to take control away from the brain, and then you have troops.
[deleted] t1_jdf7xa8 wrote
Reply to comment by Captain_Quidnunc in What are some jobs that AI cannot take? by Draconic_Flame
[removed]
Name_Dudemanbro t1_jdf7w0r wrote
Reply to comment by UpV0tesF0rEvery0ne in Apple gathers over 200 drivers to testing its self-driving car technology by nastratin
When I was there I got to work with some engineers who’d previously been on those teams. The limitless budget thing is definitely not true, and even then I don’t think spending money on R&D for potential new products is ever a bad thing when you can afford it. I’m sure they’re happy with things like the iPad coming from a “pie in the sky” idea.
FuturologyBot t1_jdf7sbf wrote
The following submission statement was provided by /u/Just-A-Lucky-Guy:
Submission statement
> The article announces that ChatGPT, a neural network-based system for generating natural language text, can now use Wolfram|Alpha and Wolfram Language to perform computations and access factual data. The author calls this capability “Wolfram superpowers” and shows some examples of how ChatGPT can answer questions and generate visualizations using these tools. The author also explains some of the technical challenges and opportunities involved in connecting ChatGPT to Wolfram|Alpha and Wolfram Language. He argues that this integration can make ChatGPT more powerful, useful, and trustworthy as a conversational agent. He also speculates about the future possibilities of “ChatGPT + Wolfram” as a platform for creating intelligent applications.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1200joq/chatgpt_gets_its_wolfram_superpowers/jdf3gte/
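As a rough illustration of the tool-use pattern described in that statement (not the actual plugin mechanism), here is a minimal sketch. It assumes Wolfram|Alpha's public Short Answers endpoint, a placeholder `WOLFRAM_APPID`, and an invented `looks_computational` heuristic; the chat-model side is left as a stub.

```python
# Illustrative only: route a question either to a chat model or to
# Wolfram|Alpha for exact computation, roughly in the spirit of the
# "ChatGPT + Wolfram" integration described above.
import re
import requests

WOLFRAM_APPID = "YOUR-APPID"  # placeholder; requires a Wolfram|Alpha developer key

def ask_wolfram(query: str) -> str:
    """Send a query to Wolfram|Alpha's Short Answers API, returning plain text."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

def looks_computational(question: str) -> bool:
    """Crude heuristic (invented for this sketch): numbers or math-y keywords."""
    return bool(re.search(r"\d|integral|derivative|solve|convert|distance", question.lower()))

def answer(question: str) -> str:
    if looks_computational(question):
        return ask_wolfram(question)  # exact computation / curated data
    return "(hand the question to the language model here)"  # stub for the chat side

print(answer("What is the distance from Chicago to Tokyo in km?"))
```

The point of the sketch is the division of labor: the language model handles open-ended conversation, while computational or factual queries are delegated to a system built for exact answers.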
theglandcanyon OP t1_jdf7re7 wrote
Reply to comment by devi83 in Did Isaac Asimov predict GPT-4? by theglandcanyon
No, I know that story and it definitely isn't the one I'm thinking of. The story I want specifically deals with text prediction (though probably not with that wording).
Maybe it wasn't even Asimov. I still feel like it was, though.
deathlydope t1_jdfvgyc wrote
Reply to comment by 7andhalf-x-6 in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Odds are the military already has a robotic third arm strapped to a chimp somewhere being tested.