Recent comments in /f/Futurology

ItsAConspiracy t1_jdv527c wrote

You don't need conscious awareness to beat me in chess, and maybe you don't need it to beat all of us and take everything.

So what worries me is not that we're copyable, but that maybe we're not. We can't prove whether another person or an animal actually experiences qualia. What if the machine doesn't, but wipes out all life anyway? Then the light of consciousness would go out of the world.

1

Longjumping-Tie-7573 t1_jdv3jb6 wrote

And I'm telling you there's already a point - for some products - where you as a consumer can't tell the difference, and the only difference you're choosing between is what you're told about the product, not the product itself. With continued advancement of AI and robotic manufacturing there will absolutely be a point where being told it's hand-made will be the only way you'd know.

So what are you gonna do when nobody tells you?

And frankly, your example of a Dali painting is a laughably bad example since the art world is absolutely cancerous with fakes that aren't even made with the exactitude robots can achieve. Just by bringing that example up you're abandoning your entire argument, so far as I'm seeing.

1

fd1Jeff t1_jdv3ae8 wrote

A book from the 1980s mentioned that it is easier to make a computer that can do a million calculations per second than it is to make a robot that could empty ashtrays. Believe it or not, even a simple task like emptying ashtrays requires a decent amount of judgment that is difficult to program.

1

RTSBasebuilder t1_jdv0can wrote

Of course, a similar argument was laid out in the first half of the 2000s, when camera companies said "Why have a mediocre 2MP camera on your mobile phone and a compromised storage for music, and a cramped keyboard for emails, when you can get a quality digital camera, a PDA, an MP3 Player, and a mobile phone - all on your belt holster?"

Of course, we all know how that argument went - so long as you can pack enough features at a cheap enough price, and a "good enough" quality, people will take that option, especially if it already fits into their daily routine.

1

phine-phurniture t1_jduwy5y wrote

I would say you are thinking too much but you are spot on...
In an evolutionary sense we are pretty close to the best the monkey model can offer.... If - and this is a big if - we can step back from our instinctive responses and embrace more logic, AI and humanity have a future together. If not, we have maybe 100 years before we fade to black.

2

datsmamail12 t1_jduwxh7 wrote

If its only limitation is physics and mathematics, just throw a bunch of papers on those at it and you still wouldn't be impressed. But when this technology finally becomes self-aware, you'll be the one saying you knew from the beginning that it was AGI. Do you even comprehend how minor a problem not knowing mathematics is when it can write novels, multitask, and understand and answer every question properly? This is AGI that simply hasn't been programmed to know what maths is. If you take a kid and raise it in a jungle, never showing it maths or physics, only language, do you think it won't have intelligence? No, it just means it hasn't been trained on those specific topics. It's just as intelligent as you and I are. Well, not me, I'm an idiot, but you people at least.

1

Surur t1_jduv10t wrote

> So basically you are saying the current system is a ponzi scheme endorsed by governments requiring more and more people to keep contributing into it?

The current system being the thing called humanity, yes.

If humans did not take care of their elderly this would not be an issue.

But in Sweden they had a solution for this - it's called Ättestupa.

3

martin_cy t1_jduu0b0 wrote

you might want to read: Robin Hanson’s book The Age of Em

basically, he proposes we will be able to make replicas of our existing brains sooner than we actually develop real AGI.. this was written before GPT.. so maybe his viewpoint has shifted.. but still, it is a very interesting deep dive into what could happen when minds can be copied and replicated to infinity..

1

circleuranus OP t1_jdut50n wrote

For myself, accuracy isn't even of the greatest concern. Consider this: modern-day reporting requires an "eyewitness" to the event or its after-effects. After all, if no human witnesses the event, it's impossible to report on it other than from a historical context. Even if the event is only captured on a camera, a human must view the footage and develop a written history of it. Every step of the process is loaded with biases. Remove those biases and substitute them with a system that is 1000% more accurate with no inherent human biases, and you have a digital God on your hands. Even if it were only 200-300% more accurate, it would still be the most reliable information dissemination system ever devised. CNN, Faux News, MSNBC....pointless.

Let's take the example of an everyday event such as a car crash. We come across the scene of an accident and we begin to build a model of what happened based on eyewitness testimony (notoriously unreliable), physical models and explanations of tire marks, impacts, etc., and form an opinion based on probabilities. So Car A was likely speeding and t-boned Car B in the intersection.....but.

Enter the Oracle...using a real-time compilation of Tesla and other EV sensor data from nearby vehicles, footage from traffic cams, nearby ATMs, mobile phones, etc., it shows that in fact Car A was traveling 4 miles under the speed limit and the driver of Car B was actually looking down at their radio at that precise moment and swerved into Car A's lane.

Mundane right? Now extrapolate that into trillions of data points. Google already knows what time I usually get out of bed from the moment I pick up my phone and activate the gyro. It probably knows what type of coffee I drink and how much. It knows what vehicle I drive and what time I leave the house. It knows what route I usually take. It knows what I'm likely wearing that day including pants and shirt sizes. It knows when I went to get my latest haircut, what type of razor I use to shave, where I go to lunch most days, what type of work I do.....and on and on and on. But it not only knows these things about me, but about everyone around me. And that's just Google/Amazon/Bing/Android/Apple etc. Consolidating all of that data and parsing it out to the level of the individual in real time? Terrifying.

You now have a system with trillions upon trillions of bits of data that understands an individual better than they understand themselves. Why wouldn't you trust such a system..? Your own mother doesn't know you as well as the Oracle. Beyond the inherent trust in the information that will eventually develop, the moment the system makes even the tiniest, most seemingly insignificant prediction with uncanny accuracy, it will still be the most credible and powerful information system in the known universe. A system that will eventually garner blind trust in its capabilities...and that's game over.

2

circleuranus OP t1_jduqpkp wrote

> became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.

There is no other system like AI capable of such a thing. Every other system we have is dependent on humans and the trust between humans and their biases. Humans actually seek information from other humans based solely on the commonality of their shared biases. Once you remove the human element, the system just "is". And such a system will be indistinguishable from magic or "the Gods".

1

snikZero t1_jduqibw wrote

The (b) table doesn't seem to show a reduction in temperatures even under the most optimistic case.

SSP1-1.9 shows the total observed temperature increasing (the lighter part of the bar), something like +0.4°C. The darker part is warming to date.

The two optimistic scenarios describe net zero by 2050, followed by net negative emissions into 2081-2100.

 

However your general point that warming can still be managed is likely correct.

1