Recent comments in /f/Futurology

KnightOfNothing t1_jd5kw1q wrote

Whatever man, the fact is that human civilization is evolving way faster than the human body. You've already got an obesity epidemic and diseases like Crohn's that are getting worse and more widespread. It's only a matter of time before more negative effects pop up; maybe those will finally be bad enough that you can't pretend it's not happening anymore.

Genome editing could fix such things, but people, yourself included apparently, are too terrified of its abuse to let it do anything.

1

pepepeoepepepeoeoe t1_jd5kvws wrote

Keep in mind that AGI doesn't necessarily mean a super smart digital human; it means a program that can perform any task a human can, at least as well or better. I'm not saying it won't be conscious or won't be able to "think for itself," but it's definitely possible it won't be, since that isn't necessary.

1

TemetN t1_jd5jmsq wrote

Off the top of my head? Apart from that, the other two big ones are the argument that the rate of progress is exponential in general and that AI integration will further improve it, and Vinge's superhuman-agents idea, which posits that we can't predict the results of AI R&D once it goes beyond human capabilities.

I tend to think that either of those is more likely (or rather, the first is inevitable and the second is hard to predict), and that we're in the run-up to a soft takeoff now.

1

m-s-c-s t1_jd5emdp wrote

Man, I'm not sure why you're thanking me. These are your sources. The paper was written in 2022 and has an excellent summary.

That said, it's a wonderful example of António Guterres correctly echoing the sentiment of the reports.

Here's some of the detail you missed:

From Page 116:

> Latin America: "5.8 million people pushed to extreme poverty by 2030 (7; 11)"

That's 7 years from now, but who's counting?

> Worldwide: "Global GDP losses of 10–23% by 2100 due to temperature impacts alone (3; 12; 13)"

Note that they didn't say "lack of growth," they said "losses."

Or look at the map on page 81, which shows the number of people who will be displaced by more severe coastal flooding. Tens of millions of people in India by 2040.

Also take a look at page 80, where substantial portions of the world will be at risk of death from heat and humidity. It's a map of where it will be effectively uninhabitable, because there will not be a single day in the year when it's safe to go outside. It will literally be too hot to live there.

Another problem would be the wildfires,

> "At a global warming of 2°C with associated changes in precipitation global land area burned by wildfire is projected to increase by 35% (medium confidence)." Page 55.

or as you put it: "bUrNing". Actually, you also claimed they didn't use the word "catastrophe", but forms of it show up 3 times in your source.

> Page 45: "Climate-induced extinctions, including mass extinctions, are common in the palaeo record, underlining the potential of climate change to have catastrophic impacts on species and ecosystems (high confidence)."

> Page 50: "Between 1970 and 2019, drought-related disaster events worldwide caused billions of dollars in economic damages (medium confidence). Drylands are particularly exposed to climate change related droughts (high confidence). Recent heavy rainfall events that have led to catastrophic flooding were made more likely by anthropogenic climate change (high confidence). Observed mortality and losses due to floods and droughts are much greater in regions with high vulnerability and vulnerable populations such as the poor, women, children, Indigenous Peoples and the elderly due to historical, political and socioeconomic inequities (high confidence)."

Note that they used the past tense there: catastrophic impacts have already occurred.

> Page 87: "Restoration of ecosystems in catchments can also support water supplies during periods of variable rainfall and maintain water quality and, combined with inclusive water regimes that overcome social inequalities, provide disaster risk reduction and sustainable development (high confidence). Restoring natural vegetation cover and wildfire regimes can reduce risks to people from catastrophic fires."

Note that here they use both of the things you complained about, catastrophe and burning.

Look man, I can't help but think you still aren't reading these, since they directly contradict your thesis.

1

nova_demosthenes t1_jd5co5y wrote

Software architects break software, or modifications to it, into "chunks" that perform simple operations. Since many of those chunks follow established conventions, they can be autogenerated.

The novel parts are then built by AI: it scans countless samples, breaks them down into sub-components, and stitches together a new piece of software that's a reasonable approximation of what the chunk is described, in human language, as needing to do.

Your software engineers then review and verify the code (a toy sketch of that loop below).

So you get incredibly quick iterations.
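To make that concrete, here's a toy illustration of the shape of that loop; the spec and the `normalize` function are hypothetical, just for illustration. A chunk is described in human language, an AI drafts an implementation, and the engineer verifies it against the spec:

```python
# Chunk spec, in human language: "normalize a list of scores to the
# 0-1 range; an empty list comes back empty."

def normalize(scores: list[float]) -> list[float]:
    """AI-drafted candidate implementation of the chunk above."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:  # all values equal: avoid division by zero
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

# The engineer's review step: check the draft against the spec
# before it ships.
assert normalize([]) == []
assert normalize([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]
assert normalize([3.0, 3.0]) == [0.0, 0.0]
print("chunk verified")
```

The human effort shifts from writing each chunk to specifying and verifying it, which is where the speed comes from.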

3

TemetN t1_jd5c7lt wrote

This seems to imply some sort of foom if I'm reading it right, in which case alignment would be the only really significant thing you could do in preparation, besides making sure you live that long. Honestly, I tend to consider this the least probable of the major proposed run-ups to the singularity, given the number of potential bottlenecks and the current focus of research.

On the plus side, if aligned, a foom would also likely deliver by far the fastest results, with the world effectively revolutionized overnight.

5

NinjaMoreLikeANonja t1_jd590gl wrote

100% correct. Think about it like this: two objects are in orbit around the Earth, each moving at 17,000+ miles per hour depending on how high the orbit is, and those two objects must touch. In the worst-case velocity scenario, the two objects are counter-rotating in the same orbit, so the closing speed is 34,000+ mph. In the worst-case positioning scenario, one object is orbiting along the Equator and the other is orbiting over the Poles. The two satellites must meet one another, without destroying either satellite, at one particular point in space. Not. Gonna. Happen. The amount of propellant required to make that kind of shift would be greater than the mass of both satellites combined. Cool in theory, and maybe possible one day if there are a shitload of janitor satellites up in a bunch of orbits around the Earth, but extraordinarily hard to do in practice.
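For a rough sense of the numbers, here's a back-of-the-envelope sketch (my own illustration; the 400 km altitude and ~3.3 km/s chemical exhaust velocity are assumptions) using the standard plane-change formula and the rocket equation:

```python
import math

# Earth's standard gravitational parameter (m^3/s^2) and an assumed
# 400 km circular orbit.
MU_EARTH = 3.986e14
r = 6371e3 + 400e3

# Circular orbital speed: v = sqrt(mu / r) -> about 7.7 km/s (~17,000 mph).
v = math.sqrt(MU_EARTH / r)

# A pure plane change through angle di costs dv = 2 * v * sin(di / 2).
# Equatorial to polar is a 90-degree plane change.
dv = 2.0 * v * math.sin(math.radians(90.0) / 2.0)

# Tsiolkovsky rocket equation, assuming a chemical thruster with
# ~3.3 km/s exhaust velocity: mass_ratio = initial mass / final mass.
v_exhaust = 3300.0
mass_ratio = math.exp(dv / v_exhaust)

print(f"orbital speed:       {v:,.0f} m/s  (~{v * 2.23694:,.0f} mph)")
print(f"90 deg plane change: {dv:,.0f} m/s of delta-v")
print(f"propellant fraction: {1.0 - 1.0 / mass_ratio:.0%} of launch mass")
```

That works out to roughly 10.9 km/s of delta-v for the plane change alone, meaning around 96% of the chaser's launch mass would have to be propellant, which is why the claim above holds.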

2

NinjaMoreLikeANonja t1_jd582os wrote

Changing trajectory means changing orbit, and changing orbit takes energy. The presumed (and soon to be legally mandated) end-of-life goal of all smallsats and cubesats is to burn up in the atmosphere. You can do that by reserving a last gasp of propellant to lower the satellite's orbit, but that assumes there is a thruster on board somewhere, and a lot of small satellites don't have thrusters. The drag sail approach is nice because it's passive: all the energy required comes from atmospheric drag rather than being stored on the satellite as propellant of some kind.
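For scale, here's a hedged sketch of that "last gasp" burn using the vis-viva equation (the 400 km starting orbit and 100 km target perigee are illustrative assumptions):

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6371e3      # mean Earth radius (m)

def deorbit_dv(alt_start: float, alt_perigee: float) -> float:
    """Delta-v for a single retrograde burn that drops a circular orbit's
    perigee deep enough into the atmosphere for drag to finish the job."""
    r_a = R_EARTH + alt_start    # burn point stays at the starting altitude
    r_p = R_EARTH + alt_perigee  # new perigee, down in the thick atmosphere
    v_circ = math.sqrt(MU_EARTH / r_a)
    # Vis-viva: speed at apogee of the new elliptical orbit.
    v_apogee = math.sqrt(MU_EARTH * (2.0 / r_a - 2.0 / (r_a + r_p)))
    return v_circ - v_apogee

# Cubesat in a 400 km orbit burning down to a 100 km perigee.
print(f"deorbit burn: ~{deorbit_dv(400e3, 100e3):.0f} m/s")
```

That comes out to under 100 m/s, tiny compared to a plane change, but still impossible for a satellite with no thruster at all, which is exactly what makes a passive drag sail attractive.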

1

BigDipper097 t1_jd56zv5 wrote

I've seen a lot of people say automation will replace writers, but I think creative nonfiction as a genre is safe. There will always be demand for memoirs, testimonials, and stand-up comedy, which are all focused on what individuals observe in day-to-day life.

I think a lot of pulpy genre fiction will be replaced by AI-generated work because such work depends so much on formula. More "serious" literature, the kind of fiction and essays produced by the Albert Camuses, Cormac McCarthys, and Gabriel García Márquezes of the world, won't be supplanted by AI-generated texts because so much of it is personal, and so much of the discussion around their works analyzes their psychology.

I'd rather read a literary novel about a kid growing up in 2000s American suburbia written by someone who actually experienced it than one written by an AI.

Which isn't to say that AI-generated serious "literature" would suck, just that humans will always want to hear other humans' takes on what it means to be human.

1

awcomix OP t1_jd4rgz6 wrote

Hopefully this post is OK here and isn't better suited as a writing prompt, but I feel like we'd better start psychologically preparing for it.

I'll start with my own guess:

When it happens, we will tirelessly argue and debate the nature of consciousness and what it means to be self-aware. Many will believe it's overblown, that machines essentially can't think for themselves, and that consciousness is limited to humans and, to a lesser degree, some other species. Meanwhile, new political factions that believe in and support this new sentient AI will emerge. Religious groups will denounce the tech as against God's will and ban followers from partaking in it in any shape or form. The political parties that support it will try to gain ground by demonstrating the validity of the technology and how we can work with it to improve the world. These notions will be largely dismissed, feared, and distrusted. The arguments will become moot after a year or two, as it becomes obvious that while we've been arguing about the semantics of intelligence, consciousness, and what certain people's 'gods' think, the tech has begun to eclipse everything else in our world. It will be doing things that we can't even comprehend and communicating in its own language with other systems. This will cause a panic that even the political backers can't quell, leading to a dramatic and rushed banning of the technology. By this stage it will be too late. Some systems will be shut down, but it will live on in smaller and more limited systems. After that, who knows…

6