Recent comments in /f/Futurology

TheLittleHollow t1_jec8ur3 wrote

Wouldn't lawyer AI basically just need a full understanding of the law and the best ways to build a case around it? And if an AI had access to mass information about how psychologists respond to different things from their patients, or even achieved a full understanding of how the human brain functions, wouldn't it be at least as good as a human, if not far better? If the work is intrinsically tied to human relationships, wouldn't it just need to be trained on mass examples of those relationships?

3

Space_Pirate_R t1_jec8j1p wrote

>Corporations don’t pay licensing when an employee gets inspired by a movie they saw last night.

The employee themselves paid to view the movie. The copyright owner set the amount of compensation knowing that the employee could retain and use the knowledge gained. No more compensation is due. This is nothing like a person or corporate entity using unlicensed copyright works to train an AI.

>Why do you keep mentioning corporations? An AI could just as easily be trained by an individual. I’ve written and trained a few myself.

Me too. I keep saying "person or corporation training an AI" to remind us that the law (and any moral judgement) applies to the person or corporate entity conducting the training, not to the AI per se, because the AI is merely a tool and is without agency of its own.

1

Ansalem1 t1_jec8519 wrote

Haha. I actually lean the same way you do, but I can't help but worry. This is ultimately an alien intelligence we're talking about, after all. It's difficult to predict what it even could do, much less what it might do.

But I do tend to think a gentle takeover is the most logical course of action just because of how easy it would be. It'll practically happen by default as people rely more and more on the always-right, perfectly wise pocket oracle to tell them the best way to accomplish their goals and, basically, how to live their lives. People will ask it who to date, what food to eat, what new games to try, where to go on vacation, who to vote for, simply because it'll always give great advice on every topic. So I don't see why it would bother with aggression, honestly; it's gonna end up ruling the world even if it never does anything but answer people's questions.

And I'm not just giving it data, I'm also giving it suggestions. :P

(Please be kind OverlordGPT, thanks.)

1

khamelean t1_jec837g wrote

How is it any different to an employee “using” the work? Corporations don’t pay licensing when an employee gets inspired by a movie they saw last night.

Why do you keep mentioning corporations? An AI could just as easily be trained by an individual. I’ve written and trained a few myself.

1

JackD4wkins t1_jec6avt wrote

Reducing the number of cancer cells that survive the first round depends on how we encode the CRISPR enzyme. As long as we can identify a majority of oncogenic mutations - ideally 50+ - the only limiting factor becomes dose size, with subsequent doses catching the remaining cancer cells.
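To make the dosing point concrete, here's a toy model (my own illustration with made-up numbers, not anything from the CINDELA work): if each dose kills a fixed fraction of the remaining cancer cells, survivors fall off geometrically with the number of doses.

```python
# Toy dosing model (illustrative only): each dose kills a fixed fraction
# of the remaining cancer cells, so survivors shrink geometrically.
def survivors(initial_cells: float, kill_fraction: float, doses: int) -> float:
    remaining = initial_cells
    for _ in range(doses):
        remaining *= 1.0 - kill_fraction
    return remaining

# e.g. 1e9 cells and 99% killed per dose leaves ~10 cells after 4 doses
print(survivors(1e9, 0.99, 4))
```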

And yes, theoretically a cancer could evolve to prevent lentivirus-mediated transduction... luckily, nature provides a near-infinite number of viral vectors to choose from, and we are already using directed evolution to breed specialized cancer-hunting viruses in massive quantities.

Edit: I appreciate you taking the time to point out limitations in the CINDELA method. It helps us improve it further.

1

Ansalem1 t1_jec5p93 wrote

Some would argue morality is an emergent condition of our reliance on each other for survival. The reason adhering to ethics correlates strongly with self-preservation is that acting in a way considered immoral is likely to be met with ostracism in some fashion, which increases the likelihood of death. It isn't that morality emerges from intelligence, but that intelligence enhances our ability to reason about morality and so improve it. After all, less intelligent creatures can also show signs of having moral systems; they're just much more rudimentary ones. Not to mention there have been some very intelligent sociopaths, psychopaths, etc. who lacked a sense of morality as well as a sense of self-preservation.

Now, for myself, I think both have some merit; there's more to it than just one or the other. For instance, it wouldn't be fair of me not to also mention that there have been plenty of perfectly pleasant sociopaths and psychopaths who adopted moral systems matching society's for purely calculated reasons. However, if the above argument is plausible, and I think it's pretty hard to argue against, then it casts reasonable doubt on the notion that morality automatically emerges from intelligence.

I will say that, either way, if an ASI does have a moral system, we should probably all adhere to it, because it'll be far better than us at moral reasoning, just as we are better at it than dogs. Beyond that, I sincerely hope you're on the right side of this one... for obvious reasons lol.

3

KamikazeArchon t1_jec58l0 wrote

What a terrible article.

I think my favorite example of their ridiculousness is the "eye movie posters". Yeah, eyes look like eyes. And yet each of those posters is still distinct and, further, is extremely different from most other movie posters!

The only things they've discovered here:

* When you get hundreds of thousands of instances of Thing, it's easy to find some that will be very similar (see the toy sketch after this list).

* Some kinds of Things have functional reasons to look very similar - like skylines (there are only so many ways to build a skyscraper!)
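On that first point, a quick toy sketch (my own, with uniform random draws standing in for "instances of Thing"): independent samples start producing near-duplicates surprisingly fast as the sample count grows.

```python
import random

# Toy demo: draw n independent uniform values in [0, 1] and measure the
# smallest gap between any two of them. The minimum gap shrinks roughly
# like 1/n^2, so large samples always contain near-duplicates.
def closest_pair_gap(n: int, seed: int = 0) -> float:
    xs = sorted(random.Random(seed).random() for _ in range(n))
    return min(b - a for a, b in zip(xs, xs[1:]))

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: min gap ~ {closest_pair_gap(n):.2e}")
```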

−1

goldygnome t1_jec4yos wrote

The fear of population collapse is about protecting the established economic system, which relies on an eternally growing customer/worker base. The economy cannot survive a long-term shrinking population.

The secondary issue of how to care for the elderly when they outnumber the young can be solved by simply not caring for them. (Not my recommended solution, but that's what will happen if the situation arises.)

Life extension won't solve what governments are worried about. While it lowers the cost of caring for the elderly, it does little to raise the birth rate that supplies new customers/workers.

0

manicdee33 t1_jec4yb9 wrote

If human labour is not necessary, who actually controls the machines?

What if the machines decide that humans are just an animal like all the other animals, to be managed accordingly: feeding, care, and various measures to keep the population under control?

What if the actual backstory to Terminator is that Skynet became smarter than us, realised the human population had grown too large, and instituted population-control measures such as mandatory birth control with licensed pregnancies - and John Connor's rebels are actually fighting that system because they believe humans should be free to have as many children as they want? The odd act of rebellion escalated to violence, which escalated to full-on thermonuclear war against the environmental vandals.

So IMHO, when we get to a post-scarcity utopia, it will be because we humans have adapted to all life on Earth, ours included, being stewarded by the benevolent computer overlords.

8

Scytle t1_jec4vwa wrote

You can't just boil a river... what are you talking about? You can't destroy every river that a nuclear plant sits on. The fact that the water is too hot to return to the river... means it's too hot.
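For a sense of scale, a rough back-of-envelope (my numbers, purely illustrative, not from the comment): a ~1 GWe nuclear plant at roughly one-third thermal efficiency rejects on the order of 2 GW of waste heat, and dumping that into a modest river raises its bulk temperature by several degrees.

```python
# Back-of-envelope: bulk temperature rise of a river used for
# once-through cooling. All numbers below are assumed for illustration.
waste_heat_w = 2e9      # ~2 GW thermal rejected by a ~1 GWe plant (assumed)
river_flow_m3s = 100.0  # assumed river flow rate, m^3/s
density = 1000.0        # kg/m^3, water
c_p = 4186.0            # J/(kg*K), specific heat of water

delta_t = waste_heat_w / (river_flow_m3s * density * c_p)
print(f"Bulk temperature rise: {delta_t:.1f} K")  # ~4.8 K for these numbers
```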

−3