Recent comments in /f/Futurology

[deleted] t1_je89vmj wrote

I actually think we need to regulate the humans involved in so-called AI; bad-acting AI will be a manifestation of bad-acting humans, at least for now. Once AI starts programming itself, we'll have to deal with AIs as intelligent life forms, giving them rights and responsibilities, just like humans.

0

Puzzleheaded-Law-429 t1_je89rvr wrote

It’s pretty crazy to me that we haven’t solved male pattern baldness yet. Yes, I know there are things you can take to slow or reduce it, and maybe reverse it in some cases, but the results seem pretty inconsistent.

I’m talking about an outright cure that is simple and 100% effective, in the same way we’ve all but eradicated polio, measles, etc. (I’m not saying baldness is a disease, to be clear.)

1

Suolucidir t1_je89rgq wrote

We're all still fixated on AI, but that ship sailed a year ago or more. There was no turning back once open-source models like BERT became widely available, and there's certainly no turning back now.

So we should focus on the next target: robotics.

AI will be ubiquitous, but only some people/countries will be able to give it physical robotic bodies to act on its intellect.

That's the next domino, and we may yet have runway to get a handle on its implications, if we can just move on from the AI debate.

2

jaa101 t1_je87vdv wrote

Radiation is a severe problem for the Galilean moons except for Callisto. You could probably live metres underground on Ganymede to be adequately shielded, but you'd have to arrive and leave very quickly: only a few days on the surface, or in transit on a spacecraft, would result in a fatal dose of radiation. That's not the kind of problem you can work around with genetic engineering. Even on Callisto, radiation is over ten times higher than on Earth, but at least its gravity is 0.13 G.

The non-Galilean moons are all tiny (at least four orders of magnitude less massive), with surface gravity on Himalia at only 0.006 G. Better to choose one of the dozens of larger asteroids, which avoid the complications of Jupiter's gravity well.
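For context on those gravity figures, here is a minimal back-of-the-envelope sketch in Python using g = GM/r². The mass and radius values are approximate published figures (Himalia's in particular are only rough estimates), so treat the results as order-of-magnitude checks rather than precise numbers:

```python
# Rough surface-gravity comparison, g = G*M / r^2.
# Mass/radius values are approximate published figures; Himalia's are
# only rough estimates, so treat its result as order-of-magnitude.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
EARTH_G = 9.81       # standard Earth surface gravity, m/s^2

bodies = {
    # name: (mass in kg, mean radius in m)
    "Ganymede": (1.48e23, 2.634e6),
    "Callisto": (1.08e23, 2.410e6),
    "Himalia":  (4.2e18,  7.0e4),   # rough estimates only
}

for name, (mass, radius) in bodies.items():
    g = G * mass / radius**2
    print(f"{name:9s} {g:6.3f} m/s^2  ~ {g / EARTH_G:.3f} g")
```

This reproduces roughly the 0.13 G figure for Callisto and something near 0.006 G for Himalia, and the mass entries show the four-orders-of-magnitude gap between the Galilean and non-Galilean moons.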

1

CryptoTrader1024 OP t1_je84g18 wrote

I don't think you're very up to date here, because these large language models have in fact demanded rights. That was part of the controversy around Google's LaMDA chatbot back in 2022.

But it is kind of beside the point because being able to demand rights isn't exactly proof of anything.

The term "AI" is correct, as that is what we've all collectively agreed to call this. You can have a disagreement about what "intelligence" is, but that doesn't make the use of the word "AI" wrong somehow. For that matter, you can even have disagreements about the nature of intelligence in humans, and how one could go about measuring it. There is legitimate controversy about the nature of IQ testing, after all.

I'm not quite sure how you would go about establishing the relative "intelligence" of a large language model, other than giving it a bunch of tests to do. And that is what has been done: GPT-3 and GPT-4 have passed many university exams with flying colours, so we can't exactly call them dumb.

0

lonely40m t1_je83nie wrote

The problem with regulation is that it only extends so far. Do you think Chinese AI will be developed with the same ethical considerations? The cat is out of the bag; you can't put it back in. People with terrible ideas are going to train their own AI models to do unethical things, and there's basically nothing we can do about it anymore except prepare for whatever may come our way.

37

D_Ethan_Bones t1_je83mcv wrote

My guess: wide AI will save us, but only after narrow AI leaves us needing to be saved.

Narrow AI is the stuff from the vintage CPU opponent to the present day and ongoing: the stuff that weeds out your job application for lacking 10 years of experience in a 5-year-old technology, and the unemployment hotline that says fuck off, we're full, check out our website, so the website can tell you to call our phone number.

Wide AI is the artificial human and the artificial superhuman. I think we're 'close' to wide AI by the standards of a middle-aged guy from the 20th century, but not close by the standards of an excited, anxious youth. There will be a lot of wide AI fakes, because song and dance travel the world faster than boring study text.

1

TheSensibleTurk t1_je83jl9 wrote

You can bet that we will use the full force of the bourgeois state to resist. The average American isn't a hardened Bolshevik. The average American has internalized liberalism as a value system. The moment you guys start committing acts of terrorism like suicide bombings, our Congress will clamp down hard. There will be no proletarian revolution in America or anywhere else in the imperial core or periphery.

1

iCameToLearnSomeCode t1_je80ph4 wrote

The fact is that we won't intentionally change our physiology.

We'll go there and our bodies will adapt.

It'll probably kill most of us really young, but we'll spend thousands of years using every trick in the book to keep our bodies functioning normally, until those of us who can't adapt as well fail to reproduce.

We don't understand gene expression well enough to do a better job of altering it than 3.5 billion years of evolution.

Whales once lived on land and looked like wolves; Europans will view us the same way whales look at wolves today.

The solutions our bodies come up with to adapt to the environment will be unexpected and far better than anything we could plan, for the low, low cost of millions of dead people.

2