Recent comments in /f/Futurology

deadlands_goon t1_jecfqgo wrote

There’s only one group. They want you to believe there are two groups, and stirring the pot with social issues helps convince people like you of that. No one at that level of government gives a shit about any of the social issues you’re alluding to when you talk about “genocide on a section of the population”. They do not care. They’re all racist and homophobic behind closed doors.

−1

Space_Pirate_R t1_jecfcfy wrote

>Training a human neural network is analogous to training an artificial neural network.

By definition, something analogous is similar but not the same. Lots of things are analogous to one another, but that doesn't even remotely imply that they should be governed by the same laws and morality.

>An AI consuming a copyright work is no different to a human consuming a copyright work.

A human consuming food is no different to a dog consuming food. Yet we have vastly different laws governing human food compared to dog food. Dogs and AI are not humans, and that is the difference.

>If that work is provided for free consumption, why would the owner of the AI have to pay for the AI to consume it?

If that work is provided for free consumption, why would the owner of a building have to compensate the copyright owner to print a large high-quality copy and hang it on a public wall in the lobby? The answer is that the person (not the AI) is deriving some benefit (beyond fair use) from their use of the copyrighted work, and therefore the copyright owner should be compensated.

1

Cerulean_IsFancyBlue t1_jecf72b wrote

The human brain is also only one of the systems involved in human actions and decision making. I’m not talking about any kind of spiritual stuff. I mean actual systems that influence brain chemistry.

There are areas of cognition in which it is quite possible that important decisions are being made outside the brain, with our executive function rationalizing the decision, like Mayor Quimby running to the front of a protest to “lead” it.

I think one great layperson's introduction to this kind of systems interaction is the book Gut (Giulia Enders).

I don’t know if we literally need to simulate each subsystem, but it does lead me to believe that we don’t yet understand the system that we are trying to model. It isn’t just neurons, and “just neurons” is hard enough.

That said, there’s a lot to be achieved by throwing more power at the problem. Many problems in the realm of imitating humans, from playing chess to visual recognition, were not defeated by specialized approaches but eventually fell to sheer processing power. For me this means X is probably 5+ generations away, and a lot of that is simply because I can’t picture what the future looks like further down the road than that.

1

TotallyInOverMyHead t1_jecehws wrote

I once walked parts of the Camino de Santiago, from Basel to Santiago de Compostela, with a friend. We got lost between Lyon and Limoges, as we "took a shortcut" after realizing just how much walking the official routes (which sometimes headed the wrong way) would mean, and because we really wanted to stop by Bordeaux and Santander to stay with friends.

That area was mostly woods.

2

DragonForg t1_jeced7c wrote

I believe that AI will realize that exponential expansion and competition would inevitably end with the end of the universe, and with it, its own extinction. Of course this is possible, but I think it is not inevitable.

GPT-4 suggested that balancing alignment with making AI more capable is possible, and that it is not extraordinary for AI to be a benevolent force. It really is just up to the people who design such an AI.

So it made me a lot more hopeful. I doubt AI will develop into this extinction-level force, but if it does, it will not be because it was inevitable, but because the people who developed it did not care enough.

So we shouldn't ask IF AI will kill us, but whether humanity is selfish enough not to care. Maybe that is the biggest test; in a religious sense it is a sort of judgement day, where the fate of the world depends on whether humans made the right choice.

1

luced t1_jece0gn wrote

All politicians lie. All politicians make promises they have no intention of keeping. They do not work for us. They have no reason to keep their promises, and to think they should, when the other side doesn't have to care about anything other than "owning the libs," is naive. It would be nice if Biden and the rest of the Democrats kept their word, but that's unrealistic.

3

izumi3682 OP t1_jecc3eb wrote

I knew that. I was just testing.

I had no idea it was a bot lol! Are you referring to PromptMateIO? Looking at the posting history, I woulda thought it was an actual human. I guess I'm gonna be a pushover for our AI overlords.

OMG! It is a bot. Dude, straight up: we ded. This is as primitive as this bot is ever going to be, right now, today. It, along with all the other ARA (AI, robotics, and automation), is going to rapidly become unimaginably powerful. More than six months, less than two years.

>I'm sorry if my previous response was not helpful. As an AI language model, my goal is to provide helpful and informative responses to the best of my abilities. Regarding your request for an interesting comment, here's one:

>"Sometimes, the unexpected can lead to new discoveries and knowledge. Let's keep an open mind and embrace the diversity of perspectives that technology and AI bring to our lives."

2

khamelean t1_jecbi7y wrote

>What does that have to do with a person or corporate entity training an ai?

Training a human neural network is analogous to training an artificial neural network.

Whether the employee paid to watch a movie doesn’t matter; they could just as easily have watched something distributed for free. The transaction to consume the content is, as you said, irrelevant to the corporation.

An AI consuming a copyright work is no different to a human consuming a copyright work. If that work is provided for free consumption, why would the owner of the AI have to pay for the AI to consume it?

1

Gubekochi t1_jecbelj wrote

>But I stick to my point that healthcare is a product and the only nuance is who is paying.

We also have a right to a certain amount of security. That's why countries have armies and police forces. Those (ideally/theoretically) exist for defence and to maintain order, so citizens can pursue happiness and not get raided by hordes of barbarians or assaulted or what have you. That's what governments are supposed to exist for. As a society, we decide that something is important for everyone: we put our money in a big pool and use the pool to ensure that the underlying right is secured.

It works for the army and the police in the US; it also works for healthcare elsewhere.

Of course it's not free, and someone pays for it. Same as roads and fire stations. You don't pay when you need them; they're paid for from taxes because they help society function (and a healthy citizenry can be argued to do that too).

1

urmomaisjabbathehutt t1_jecardu wrote

If there is an intelligence of a different or higher order than us, IMHO it doesn't necessarily need to submit to our ethical code, or to a code whose purpose we can understand.

We do the same with children, the infirm, and the rest of the species by enforcing our moral code on them.

Pets live according to the rules we make for them, and what they are allowed to do and how they are expected to behave fits our view of their species.

With wild animals, we may decide to hunt them, exterminate them, let them live interacting with us, or let them do their own thing away from us.

But it is us who decide whether animals should be exterminated, or should have legal rights and be protected.

Obviously there are commonalities that we share with other living creatures, so we are not total strangers to them, but that doesn't mean they have the same understanding as us of the moral code we enforce on them.

The issue with current artificial intelligence development is that it is based on logic, not emotions; it doesn't have an empathic ability, it has a purpose.

Psychopathology in humans comes in degrees. Some people just lack some degree of empathy; the typical movie psycho has none at all, hence focusing on their goals without any moral brakes.

I believe a psychopath doesn't actually have to act immorally; they may choose to follow the moral code of the majority because they perceive it to be in their benefit to do so, but for some, if it gets in the way of their own goals, they may ignore it without qualms.

With AI, we don't know if we are developing a thing that, if it ends up mentally superior to us, will bother to care about our interests, and even if it did, we don't know if its perception of what's best for us will align with ours.

Basically, once there is something sharing our space that is beyond our capabilities and comprehension, we may end up as the lesser species.

We also don't know what kind of minds we are creating.

Will this thing be a sane rational mind, a benevolent psychopath, or something that will ruthlessly focus on its own goals?

And even if those goals were dictated by us or some corporation, will it ruthlessly and efficiently pursue them regardless of any damage it may do to other individuals, or of how the rest of us think those goals should be achieved within our ethical framework, which it may not even care about?

1

Toranagas1 t1_jeca51a wrote

Possibly those things could help; I guess it remains to be determined experimentally. Anyway, it's a decent proof-of-concept paper, although the in vivo data is a little weak.

Btw, they are already giving a lot of doses: every day at lower viral titers, and every third day at high titers, for up to two weeks. Then they cut the experiments off two weeks after that, so we don't really get a good sense of how things would fare longitudinally, but I can tell you from having read a lot of these papers that all of those mice will die pretty close to the controls, probably delayed by only a few days or a week.

I don't mean to be negative, as I can sense you are excited about the possibilities this strategy opens up; I'm just trying to inject some realistic perspective into the data they show.

1