Recent comments in /f/Futurology

Silly-Barracuda-2729 t1_jd82yc8 wrote

I think the endgame for society will be when we evolve beyond the bounds of our universe. I believe in the concept of infinity and an infinite reality, so I'm sure there will always exist some form of community in one way or another. Possibly as a Type 7 being on the Kardashev scale.

1

OriVerda t1_jd82v9c wrote

Right, so in Star Trek they've reached a post-scarcity, utopian society where one of three things happens to future humanity.

  1. They hop on a starship and colonize a planet so they can do honest work as pioneers; working relatively primitively is fulfilling and enriches their lives.

  2. They hop on a starship and explore a vast universe, discovering things and making technological breakthroughs in travel, robotics, holographics, and artificial intelligence. Invention for the betterment of all and invention for the sake of invention.

  3. They dedicate themselves to the arts. Poetry, painting, sculptures and so on. Again, to enrich their lives.

7

oferchrissake t1_jd82f0q wrote

Good answer.

Given how much creative energy sci fi has put into this question, there’s probably a lot of material available to speak to this.

Dan Simmons’ Hyperion cycle addresses this extensively as well, and Peter F Hamilton’s Void series offers another take.

These authors all really dig into what we could do with AI as powerful tools, and examine what might happen if they analyze humanity and decide we're not their kind of people.

2

OpenlyFallible OP t1_jd82282 wrote

Submission statement - The increasing use of Artificial Intelligence (AI) poses a range of dangers that require attention. One significant danger is that AI systems may perpetuate or even amplify biases and prejudices that exist in society, leading to discriminatory outcomes. Another risk is the potential loss of jobs as AI systems become increasingly capable of performing tasks previously done by humans. Additionally, there is a risk of accidents or errors caused by the complexity of AI systems, which could lead to catastrophic consequences. Finally, the deployment of autonomous weapons systems using AI could lead to unpredictable and uncontrollable behavior with potentially devastating effects. These risks highlight the need for careful consideration of the development and deployment of AI systems, including ethical and regulatory frameworks to minimize the risks and ensure their responsible use.

9

tiopepe002 OP t1_jd81m8w wrote

Are you really sure about that?

Are you super certain that our intellectual capabilities are really above what even 200 further years of AI advancement can achieve?

In my question, I didn't specify a timeline, because I have no idea of one. But however long it takes for AI to achieve the impossible, that's where I want you to go. :)

3

__The__Anomaly__ t1_jd80xku wrote

Have you ever read The Culture series by Iain M. Banks?

If you like sci fi, give it a read; there are some eye-opening ideas in it about what the endgame for society with highly advanced AI will be.

Advanced AI will be able to do almost anything that humans don't want to do and do it much better than humans, so this will usher in an advanced post-scarcity age which allows for much less restrictive legal and societal structures.

13

DisasterousGiraffe OP t1_jd7z3sc wrote

"Solar, according to the IPCC report, can deliver more emission cuts than any other technology by 2030, when the world needs to have cut its emissions by at least half if it is to have any chance of capping average global warming at 1.5°C. Solar and wind together offer nearly ten times the emission cut potential than nuclear, and 20 times that of carbon capture."

22

MuForceShoelace t1_jd7yucl wrote

Mostly not like that. An AI like ChatGPT isn't a guy who reads manuals, learns from them, and can then answer questions. So training it on the manuals wouldn't get you much beyond what a search of the documents would do, only worse.

What it really would need is to train on like, message boards where people answered those questions again and again, until it could predict what answer goes with what sort of question.

(But also, a major flaw in this sort of statistical language generation is that it generates sentences without really knowing what anything is. So if you asked for the minimum width of some cable or something, it would do a great job creating an answer that LOOKED like an answer with some numbers in it, but those numbers wouldn't necessarily be anything real. ChatGPT is really bad at this: if you ask for a phone number, it'll produce a phone-looking number that is just random digits, because that gives a sentence that looks correct.)

1