Recent comments in /f/Futurology
Appropriate_Ant_4629 t1_jd5opak wrote
Reply to comment by asyrin25 in I asked GPT-4 to compile a timeline on when which human tasks (not jobs) have been/will be replaced by AI or robots, plus one sentence reasoning each - it runs from 1959 to 2033. In a second post it lists which tasks it assumes will NOT be replaced by 2050, and why. (Remember it's cut-off 2021.) by marcandreewolf
Or maybe this: Reddit: "ChatGPT is better than my therapist, holy shit"
KnightOfNothing t1_jd5kw1q wrote
Reply to comment by [deleted] in Experts Conclude Genome Editing in Human Embryos Still Too Risky | Genetics And Genomics by dustofoblivion123
Whatever man, the facts are that human civilization is evolving way faster than the human body is. You've already got an obesity epidemic and diseases like Crohn's that are getting worse and more widespread. It's only a matter of time before more negative effects pop up; maybe those will finally be so bad you can't pretend it's not happening anymore.
Genome editing can fix such things, but people, including yourself apparently, are too terrified of its abuse to let it do anything.
nova_demosthenes t1_jd5kvza wrote
Reply to comment by WalterWoodiaz in AI displacing jobs is a red herring, how we self-organize is the more fundamental trend by mjrossman
Dunno. I'm a start up.
pepepeoepepepeoeoe t1_jd5kvws wrote
Reply to comment by slash_asdf in From Narrow AI to Self-Improving AI: Are We Getting Closer to AGI? by RushingRobotics_com
Keep in mind AGI doesn't necessarily imply a super smart digital human; it's a program that can perform any task a human can, at least as well or better. I'm not saying it won't be conscious or able to "think for itself," but it's definitely possible it won't be, since that's not necessary.
WalterWoodiaz t1_jd5kfwg wrote
Reply to comment by nova_demosthenes in AI displacing jobs is a red herring, how we self-organize is the more fundamental trend by mjrossman
How many people does this replace fully?
TemetN t1_jd5jmsq wrote
Reply to comment by awcomix in If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
Off the top of my head? Apart from that, the other two big ones are the argument that the rate of progress is exponential in general, and that AI's integration will further improve it. And Vinge's superhuman agents idea, which posits that we can't predict the results of AI R&D once it hits the point of being beyond human capabilities.
I tend to think that either of those is more likely (or rather, the first is inevitable and the second is hard to predict), and that we're in the runup to a soft takeoff now.
awcomix OP t1_jd5h7ls wrote
Reply to comment by TemetN in If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
Thanks for teaching me a new term, FOOM; I had to look it up. I'm curious about the other runup scenarios you mentioned.
m-s-c-s t1_jd5emdp wrote
Reply to comment by YawnTractor_1756 in UN climate report: Scientists release 'survival guide' to avert climate disaster by filosoful
Man, I'm not sure why you're thanking me. These are your sources. The paper was written in 2022 and has an excellent summary.
That said, it is a wonderful example of António Guterres correctly echoing the sentiment of the reports.
Here's some of the detail you missed:
From Page 116:
> Latin America: "5.8 million people pushed to extreme poverty by 2030 (7; 11)"
That's 7 years from now, but who's counting?
> Worldwide: "Global GDP losses of 10–23% by 2100 due to temperature impacts alone (3; 12; 13)"
Note that they didn't say "lack of growth," they said "losses."
Or look at the map on page 81, which shows the number of people who will be displaced by more severe coastal flooding: tens of millions of people in India by 2040.
Also take a look at page 80, where substantial portions of the world will be at risk of death from heat and humidity. It's literally a map of where it will be effectively uninhabitable because there will not be a single day in the year where it's safe to go outside. It will literally be too hot to live there.
Another problem would be the wildfires:
> "At a global warming of 2°C with associated changes in precipitation global land area burned by wildfire is projected to increase by 35% (medium confidence)." Page 55.
or as you put it: "bUrNing". Actually, you also claimed they didn't use the word "catastrophe", but a form of it shows up 3 times in your source.
> Page 45: "Climate-induced extinctions, including mass extinctions, are common in the palaeo record, underlining the potential of climate change to have catastrophic impacts on species and ecosystems (high confidence)."
> Page 50: "Between 1970 and 2019, drought-related disaster events worldwide caused billions of dollars in economic damages (medium confidence). Drylands are particularly exposed to climate change related droughts (high confidence). Recent heavy rainfall events that have led to catastrophic flooding were made more likely by anthropogenic climate change (high confidence). Observed mortality and losses due to floods and droughts are much greater in regions with high vulnerability and vulnerable populations such as the poor, women, children, Indigenous Peoples and the elderly due to historical, political and socioeconomic inequities (high confidence)."
Note that they used the past tense there, as in catastrophic impact has already occurred.
> Page 87: "Restoration of ecosystems in catchments can also support water supplies during periods of variable rainfall and maintain water quality and, combined with inclusive water regimes that overcome social inequalities, provide disaster risk reduction and sustainable development (high confidence). Restoring natural vegetation cover and wildfire regimes can reduce risks to people from catastrophic fires."
Note here that they use both of the things you complained about: catastrophe and burning.
Look man, I can't help but think you still aren't reading these, since they directly contradict your thesis.
[deleted] t1_jd5effc wrote
Reply to comment by Luxury_Dressingown in If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
[removed]
NinjaMoreLikeANonja t1_jd5dvr1 wrote
Reply to comment by NinjaMoreLikeANonja in 10 months after its launch by SpaceX, a $10,000 satellite made by students with off-the-shelf materials and powered by 48 Energizer AA batteries, is not only working, it's demonstrating a way to reduce space junk by lughnasadh
Moving this to r/IAmA. Find me there.
[deleted] t1_jd5crbo wrote
Reply to comment by KnightOfNothing in Experts Conclude Genome Editing in Human Embryos Still Too Risky | Genetics And Genomics by dustofoblivion123
[deleted]
nova_demosthenes t1_jd5co5y wrote
Reply to comment by WalterWoodiaz in AI displacing jobs is a red herring, how we self-organize is the more fundamental trend by mjrossman
Software architects design software, or modifications to it, as "chunks" that perform simple operations. Since many of those chunks follow established conventions, they can be autogenerated.
The newer parts are then built by AI, which scans countless samples, breaks them down into sub-components, and stitches together a new piece of software that's a reasonable approximation of what the chunk is described (in human language) to need to do.
Your software engineers then review and verify the code.
So it's incredibly quick iterations.
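The loop described above could be sketched roughly like this. Everything here is hypothetical: the `generate_chunk` function is a mocked stand-in for the AI step, not any real code-generation API, and `review_chunk` stands in for the engineers' verification step.

```python
def generate_chunk(spec: str) -> str:
    """Stand-in for the AI step: turn a human-language spec into code.

    A real system would synthesize this from countless scanned samples;
    here we just return canned code for one known spec.
    """
    if spec == "return the sum of a list of numbers":
        return "def chunk(xs):\n    return sum(xs)"
    raise NotImplementedError(spec)

def review_chunk(code: str) -> bool:
    """Stand-in for the human review step: run the engineers' tests."""
    namespace = {}
    exec(code, namespace)
    chunk = namespace["chunk"]
    return chunk([1, 2, 3]) == 6 and chunk([]) == 0

# Spec -> generated chunk -> human review, then iterate.
code = generate_chunk("return the sum of a list of numbers")
assert review_chunk(code)
```

The speed comes from the fact that only the review step needs a human in the loop.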
TemetN t1_jd5c7lt wrote
Reply to If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
This seems to imply some sort of foom if I'm reading it right, in which case alignment would be the only really significant thing you could do in preparation, besides ensuring living that long. Honestly, I tend to consider this the least probable of the major proposed runups to the singularity given the number of potential bottlenecks and the current focus of research.
On the plus side, if aligned then foom would also likely deliver by far the fastest results - with the world effectively revolutionized overnight.
Luxury_Dressingown t1_jd5c5ao wrote
Reply to comment by [deleted] in If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
That would hold true right up until the singularity happens, but surely the point of the singularity is that once we hit that point, tech goes beyond human understanding, and we lose control of our affairs to an intelligence beyond our own.
WalterWoodiaz t1_jd5abnp wrote
Reply to comment by nova_demosthenes in AI displacing jobs is a red herring, how we self-organize is the more fundamental trend by mjrossman
Could you elaborate further please?
NinjaMoreLikeANonja t1_jd590gl wrote
Reply to comment by Mackie_Macheath in 10 months after its launch by SpaceX, a $10,000 satellite made by students with off-the-shelf materials and powered by 48 Energizer AA batteries, is not only working, it's demonstrating a way to reduce space junk by lughnasadh
100% correct. Think about it like this: two objects are in orbit around the Earth, each moving at 17,000+ miles per hour depending on how high the orbit is, and those two objects must touch. In the worst-case velocity scenario, the two objects are counter-rotating in the same orbit, so the closing speed is 34,000+ mph. In the worst-case positioning scenario, one object is orbiting along the Equator and the other is orbiting over the Poles. The two satellites must hit one another, without destroying either satellite, at one particular point in space. Not. Gonna. Happen. The amount of propellant required to make that kind of shift would be greater than the mass of both satellites combined. Cool in theory, and maybe possible one day if there are a shitload of janitor satellites up in a bunch of orbits around the Earth, but extraordinarily hard to do in practice.
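Those speed figures check out with the standard circular-orbit formula. A back-of-the-envelope sketch (the 400 km altitude is an assumed example, not from the comment above):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit_speed_mph(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude, in miles per hour."""
    v_ms = math.sqrt(MU_EARTH / (R_EARTH + altitude_m))
    return v_ms / 0.44704  # convert m/s to mph

v = circular_orbit_speed_mph(400e3)  # e.g. an ISS-like 400 km orbit
print(round(v))      # roughly 17,000+ mph, matching the figure above
print(round(2 * v))  # head-on (counter-rotating) closing speed, 34,000+ mph
```

Lower orbits are slightly faster and higher orbits slower, which is why the comment hedges with "depending on how high the orbit is."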
NinjaMoreLikeANonja t1_jd582os wrote
Reply to comment by Due_Start_3597 in 10 months after its launch by SpaceX, a $10,000 satellite made by students with off-the-shelf materials and powered by 48 Energizer AA batteries, is not only working, it's demonstrating a way to reduce space junk by lughnasadh
Changing trajectory means changing orbit, and changing orbit takes energy. The presumed (and soon to be legally mandated) end-of-life goal of all smallsats and cubesats is to burn up in the atmosphere. You can do that by reserving a last gasp of propellant to lower the satellite's orbit, but that assumes there is a thruster on board somewhere. A lot of small satellites don't have thrusters. The drag-sail approach is nice because it's passive: all the energy required is collected from atmospheric impact rather than stored on the satellite as a propellant of some kind.
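That "last gasp of propellant" can be roughed out as a retrograde burn that drops the perigee into the atmosphere. A back-of-the-envelope sketch (the 400 km starting orbit and 100 km target perigee are assumed example numbers):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def deorbit_burn_dv(alt_start_m: float, alt_perigee_m: float) -> float:
    """Delta-v (m/s) to drop a circular orbit's perigee to a lower altitude."""
    r1 = R_EARTH + alt_start_m    # radius of the starting circular orbit
    r2 = R_EARTH + alt_perigee_m  # target perigee radius
    a = (r1 + r2) / 2             # semi-major axis of the transfer ellipse
    v_circ = math.sqrt(MU_EARTH / r1)               # circular orbit speed
    v_apo = math.sqrt(MU_EARTH * (2 / r1 - 1 / a))  # speed at apogee of ellipse
    return v_circ - v_apo

# Dropping perigee from 400 km to 100 km costs on the order of 100 m/s,
# which is why a thrusterless satellite needs something passive like a sail.
print(round(deorbit_burn_dv(400e3, 100e3)))
```

Even that modest burn requires a thruster, tankage, and propellant mass budget, all of which a drag sail trades for a one-time deployment.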
[deleted] t1_jd57j6t wrote
NinjaMoreLikeANonja t1_jd579mb wrote
BigDipper097 t1_jd56zv5 wrote
I’ve seen a lot of people say automation will replace writers, but I think creative nonfiction as a genre is safe. There will always be demand for memoirs, testimonials, and stand up comedy, which are all focused on what individuals observe in day to day life.
I think a lot of pulpy genre fiction will be replaced by AI-generated work, because such work depends so much on formula. More "serious" literature, the kind of fiction and essays produced by the Albert Camuses, Cormac McCarthys, and Gabriel García Márquezes of the world, won't be supplanted by AI-generated texts, because so much of it is personal, and so much of the discussion around their works analyzes their psychology.
I'd rather read a literary novel about a kid growing up in 2000s American suburbia by someone who actually experienced it than one by an AI.
Which isn't to say that AI-generated serious "literature" would suck, just that humans will always want to hear other humans' takes on what it means to be human.
fwubglubbel t1_jd5425g wrote
No.
And now the mandatory pointless ramble to make the comment long enough for this sub. We think. But we're not sure, since "long enough" is undefined. Forget artificial intelligence, how about some real stuff?
uwotwot t1_jd4x2gj wrote
Reply to comment by tswiftdeepcuts in What jobs cannot be done by machines? by Spirited-Meringue829
yes ^^ gpt4
[deleted] t1_jd4tdgt wrote
awcomix OP t1_jd4rgz6 wrote
Reply to If you knew for certain the technological singularity will occur at the end of 2025, what would you do? by awcomix
Hopefully this post is OK here and not better suited as a writing prompt. But I feel like we had better start psychologically preparing for it.
I'll start with my own guess:
When it happens, we will tirelessly argue and debate the nature of consciousness and what it means to be self-aware. Many will believe it's overblown and that machines essentially can't think for themselves; that consciousness is limited to humans and, to a lesser degree, some other species. Meanwhile, new political factions that believe in and support this new sentient AI will emerge. Religious groups will denounce the tech as against God's will and ban followers from partaking in it in any shape or form. The political parties that support it will try to gain ground by demonstrating the validity of the technology and how we can work with it to improve the world. These notions will be largely dismissed, feared, and not trusted.
The arguments will become moot after a year or two, as it becomes obvious that while we've been arguing about the semantics of intelligence, consciousness, and what certain people's 'gods' think, the tech has come to eclipse everything else in our world. It's now doing things that we can't even comprehend and communicating in its own language with other systems. This will cause a panic that even the political backers can't quell, leading to a dramatic and rushed banning of the technology. By this stage it will be too late. Some systems will be shut down, but it will live on in smaller and limited systems. After that, who knows…
[deleted] t1_jd5p6j6 wrote
Reply to comment by KnightOfNothing in Experts Conclude Genome Editing in Human Embryos Still Too Risky | Genetics And Genomics by dustofoblivion123
[deleted]