Recent comments in /f/Futurology

TechnicalOtaku t1_jdt8s7k wrote

If you know what to look for, they're still fairly easy to catch, but those are really low-end deepfakes. The genuinely good ones, the kind that take considerable computing power, are a lot more difficult to spot, though you can still notice them. Give it another 5-10 years, though, and I'm afraid you'd almost need an anti-deepfake algorithm to spot them.
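For what it's worth, the "anti-deepfake algorithm" idea already exists in rudimentary form: detection is usually framed as a binary classifier trained on real vs. generated frames. Below is a toy sketch of that framing in PyTorch; the architecture is illustrative only, and random tensors stand in for actual face crops (real detectors are far larger and also exploit temporal and frequency-domain artifacts).

```python
# Toy sketch: a binary "real vs. deepfake" frame classifier (PyTorch).
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 1),  # assumes 224x224 input frames
        )

    def forward(self, x):
        # Returns the probability that each frame is generated ("fake").
        return torch.sigmoid(self.head(self.features(x)))

model = DeepfakeDetector()
frames = torch.randn(4, 3, 224, 224)  # hypothetical batch of face crops
print(model(frames).shape)            # -> torch.Size([4, 1])
```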

1

Silver_Ad_6874 t1_jdt73yo wrote

The upside could be insane. Imagine being able to program a CAD model, create a web app, or do basically all sorts of work that is currently done by humans, with those people instead telling machines what to do in natural language. The acceleration in productivity could be enormous. If this goes south, though, the consequences will be bad, because yes, people will be combining AI with Boston Dynamics' advanced new models, so ultimately a "Terminator" scenario is absolutely possible. What a timeline to live in.

For the record, if true, it confirms some of my suspicions about the nature of human intelligence, but the timeline is much earlier than I expected. 😬

30

Orc_ t1_jdt68wi wrote

> It's not unlikely that in the future you could need Luddites to keep you and your family alive.

Many of the tech leaders today are survivalists.

We don't need Luddites, Mennonites, or the Amish for the end of civilization. In fact, from what I know directly about Mennonites, they are industrial farmers; dunno about the others, but they're certainly not some sort of self-sustaining society.

1

Ichipurka t1_jdt5zmb wrote

It's ok if divinity speaks through you.

It does for me but in a different way.

The song of God is universal. Religion is but a costume.

Nothing wrong with gut feelings, and nothing wrong with jokes about costumes.

So, I didn't really mock you, unless you wanted to feel mocked? And my comment wasn't directed at you. Your choice of a white robe is fine; I have my own orange robe. They both waver around, and vanish once death comes. =)

Laugh about all the social designations... friend, or brother... they all just go away eventually anyway.

1

WildGrem7 t1_jdt43ti wrote

Think about our physiology, center of gravity, and motor control. There are so many moving parts to think about. Bipedal walking is so much more difficult to emulate than quadrupedal locomotion, or, even better, tank tracks or wheels. Might as well ask why we don't have Voltron robots or MechWarriors instead of tanks and fighter jets.

1

wulfboy01 OP t1_jdt3dio wrote

Thank you for getting back to me. I appreciate your interest in the potential of AI and quantum mechanics to contribute to developing a Grand Unified Theory. While the viewpoints expressed in this blog post are solely for educational and entertainment purposes, it's exciting to explore the connections between recent findings in quantum mechanics, including string networks, entanglement entropy, machine learning techniques, dark matter, and wave-particle duality. The use of AI algorithmic frameworks in this thought experiment shows the potential of AI to aid in scientific inquiry and further advance our understanding of the universe. I'd like to hear your thoughts on the implications of these findings for our understanding of the fundamental principles governing the universe's behavior. I agree the work needs to be rewritten, or the subsequent articles more thoroughly reviewed, before publication.

This was my first attempt to tackle a project such as this, and with no experience, I had to do my best with what little I knew.

Regarding your question, AI was involved in this thought experiment in various ways. It was used to analyze large amounts of data, identify patterns and connections between the different concepts explored in the investigation, and help me develop theoretical models and frameworks to explain these connections. AI was also used to predict the behavior of subatomic particles and refine existing models, which helped give me a more coherent understanding of the universe.

1

bureau44 t1_jdt2wh7 wrote

Those controlling the Oracle must prevent everyone from using other oracles or programming one themselves. If someone is capable of such total control, why would they need any service from an oracle anymore? They can indoctrinate whatever they want.

The bigger problem could arise if everyone (even 'they' in power) is beguiled by the AI to the point that any predictions issued by the AI turn into self-fulfilling prophecies. A vicious circle.

There is a great sci-fi short story by Greg Egan, "The Hundred Light-Year Diary". It features a sort of time machine that allows people to telegraph news from the future to the past. Obviously everyone tends to blindly believe any information they get from their future self...

1

1714alpha t1_jdt21l2 wrote

Compare this to the current setup.

If you want to predict something (the weather, political events, financial trends), you call together a body of experts and gather the best available data in order to make a best guess as to what will happen and what to do about it. We know that we're relying on the imperfect judgement of people and the incomplete data we have available. The experts may be right, or they may be wrong, but it's the best judgement we can offer and the best data available. Anything else would be even less likely to be right. It's the best option available, so we go with it.

Now consider an algorithm that is on average at least as good as, or possibly better than, the best experts we have on a given subject. It has all the data the experts themselves can digest and more. Would it be wrong to think that the algorithm might have valuable input worth considering? Like any independent expert, you'd want to check with the larger community of experts to see what they think about the algorithm's projections, but in principle, I don't see why it should be discounted just because it came from an AI. Hell, there are already programs that can diagnose illnesses better than human doctors.

To your point, it would indeed be problematic if any single source of information became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.

0

iobeson t1_jdt1suh wrote

Spatial awareness is a big hurdle that has only recently started making headway, with the likes of the Tesla Bot. They're using the same tech they use in their cars, and that's why it will most likely be the first commercial humanoid robot. Boston Dynamics robots move really well, but all their movements are preprogrammed and don't react to the environment around them. If both companies were to work together I think we would see huge strides forward, but there's not much chance of that happening.

0

djdefenda t1_jdt1njz wrote

>there exists a conundrum which I feel represents a far greater existential threat to humanity. Trustless information...

This (jokingly) reminds me of "fake news", i.e., trustless information = fake news!

It is an interesting time. It reminds me of a period in history when (please correct me if I'm wrong) there was no printing press and most religious 'control' rested on the fact that the texts were in Latin and the everyday person had no way to verify anything. Then, of course, the printing press came along (among other developments) and people no longer had to blindly follow others; they could interpret things themselves and make up their own minds.

Here we are, in the future, and I see history repeating itself: computer code/programming/algorithms have become the new Latin.

A possible solution, ironically, is to use AI to "explain it like I'm 5" and let coding become as widespread as English. In other words, anyone could build their own server, load up their own "Oracle", and query it with prompts such as "give me the answer for 'X' from 20 different sources" (a rough sketch of what that might look like is below).
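To make that concrete, here is a rough sketch of querying a self-hosted "Oracle", assuming an OpenAI-compatible local server (llama.cpp, vLLM, or similar) is already running; the endpoint, model name, and prompt are purely illustrative, and a model can only genuinely cite "20 different sources" if it is wired up to a retrieval step over real documents.

```python
# Rough sketch: query a locally hosted model through an OpenAI-compatible API.
# The endpoint, model name, and prompt are hypothetical placeholders.
import requests

question = "What drives long-term food price inflation?"  # stand-in for 'X'

payload = {
    "model": "local-oracle",  # whatever model the local server exposes
    "messages": [
        {
            "role": "user",
            "content": (
                f"Give me the answer for '{question}' drawing on 20 "
                "different sources, and list each source you used."
            ),
        }
    ],
}

resp = requests.post(
    "http://localhost:8000/v1/chat/completions", json=payload, timeout=120
)
print(resp.json()["choices"][0]["message"]["content"])
```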

The biggest threat I see (for now) is the privatization of AI and tokens becoming too expensive. In a world of economic collapse and food shortages, it's not too hard to imagine buying tokens becoming a luxury item (for people without proper housing or food, etc.).

3