Recent comments in /f/Futurology

MrZwink t1_jdzawix wrote

Yeah, I've read papers with the math on this.

Even at just a few millimeters thick, you would still need at least Mercury and Venus. Remember the sun is hot, so you can't go too far in. You'd think picking a lower-energy-output star would help, but brown dwarfs and red dwarfs have unstable ejections and radiation, making them unsuitable.
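For a rough sense of scale, here's a quick back-of-the-envelope sketch (my own numbers, not from those papers): the mass of a shell a few millimeters thick at roughly Earth's orbital distance, compared with the combined mass of Mercury and Venus. The 3 mm thickness and rocky density are just illustrative assumptions.

```python
# Back-of-the-envelope Dyson shell mass estimate (illustrative assumptions only).
import math

AU = 1.496e11        # shell radius in meters (~Earth's orbital distance)
thickness = 0.003    # assumed shell thickness: 3 mm
density = 3000.0     # assumed rocky material density, kg/m^3

shell_area = 4 * math.pi * AU**2               # surface area of the sphere
shell_mass = shell_area * thickness * density  # total mass of the shell

mercury_mass = 3.30e23   # kg
venus_mass = 4.87e24     # kg

print(f"shell mass:      {shell_mass:.2e} kg")
print(f"Mercury + Venus: {mercury_mass + venus_mass:.2e} kg")
print(f"fraction used:   {shell_mass / (mercury_mass + venus_mass):.0%}")
```

Under those assumptions, a 3 mm shell already eats roughly half the combined mass of Mercury and Venus, before even accounting for how little of a planet is usable structural material.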

2

izumi3682 OP t1_jdzarvv wrote

>...soon will reach a limit.

OMG! You sound just like 2018 all over again. Well, OK, I'll look you up in a year or two, or you can look me up, and we'll compare notes. I'm not going anywhere; I mean, unless I get hit by a truck or something. But I have been continuously in r/futurology for nearly ten years now. I have pretty much seen it all, and I will continue to breathlessly report all the latest developments in AI and anything else "futurey" that attracts my attention.

About AI winters and "limits": AI winters occur when, for technical or even philosophy-of-science reasons, progress hits a wall. The AI winter from the mid-90s to the mid-00s, about ten solid years, happened because it did not seem possible for contemporary computing to realize the long-theorized "neural network," first seen in very primitive form in the perceptron back in the late 1950s. Marvin Minsky, the finest AI scientist of his day, said as much: the problem seemed to be intractable.

Rising AI scientists like Geoffrey Hinton were basically "alone in the wilderness," struggling to advance the science even a fraction of an increment, and for a long time even Hinton had little luck. The other element of AI winters is when the investors who initially seeded these projects with considerable sums of money begin to think, hmm, this isn't going to pan out after all. Then the money dries up, a vicious cycle feeds itself, and virtually no progress occurs.

Hinton racked his brain trying to come up with ways to make CPUs realize that elusive neural network. Primitive ones did already exist, but new ideas were needed. I'm not sure exactly how, but Hinton looked at the GPUs that Nvidia was building for advanced (by early-2000s standards) video game graphics. He likely realized that the massive parallelism that made GPUs so good at graphics could also be used to train the long-sought convolutional neural networks. Further, he took a relatively old concept, backpropagation, and used it along with GPUs to almost literally force the CNN into existence. Many other now-renowned AI experts were instrumental in this as well.
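To make that recipe concrete, here's a minimal sketch (mine, not Hinton's code) of the combination described above: a tiny convolutional network pushed onto a GPU and trained with backpropagation. PyTorch is assumed, and the architecture and random batch are purely illustrative.

```python
# Minimal illustration: a small CNN, moved to the GPU when available,
# trained for one step via backpropagation (loss.backward()) and SGD.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # the GPU part

# Tiny CNN for 28x28 single-channel images (MNIST-sized inputs)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch, standing in for real image data
x = torch.randn(32, 1, 28, 28, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # backpropagation: compute gradients for every weight
optimizer.step()  # update the weights
print(f"loss after one step: {loss.item():.3f}")
```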

Hinton, in his typical engineering understatement, said of GPUs, "This seems to work." And from that point forward, "narrow AI" began to explode. And explode. And explode. Tens of thousands of narrow-AI-aided apps, "Siri" among them, suddenly came into existence. The one that blew me away personally, around 2015 or '16, was Google Translate. The translated text on my iPhone screen was rendered in a font identical to the original, and even the color matched if the original text was in color. When I saw how that worked, it was like magic, a miracle of technology.

Then around 2016 I had this other app on my iPhone called "AIpoly," an experimental sort of beta app meant for blind users. You pointed your iPhone camera at objects up close and it would state in text what it saw. I pointed the camera at my hand, with the doc I worked with right there beside me. The text said, "The back of a hand." Our jaws collectively dropped and we both said "Whoaaa!!" in genuine amazement. Then I pointed it at my computer monitor and the text read "computer monitor." There was a way to turn on sound for blind users, but I could not find it, so we just relied on the text. It could not identify a candy wrapper on the desk; it said, "I am unable to identify this." But OMG! We were blown away.

Two years later, in 2018, the first GPT was released, with roughly 117 million parameters. And the rest, of course, is history.

There will never ever again be an "AI winter," for two reasons. The first is that our extant AI is so inextricably entwined in all human affairs that it must continuously improve or everything fails. That leads to the second part of the first reason: when ChatGPT released on 30 Nov 22, it reached a million users within about five days and an estimated 100 million within two months, the fastest adoption of a consumer technology in recorded history. Of those users, I'm pretty confident a goodly percentage are AI developers in their own right. And I'm further confident that we shall see an absolute "Cambrian explosion" of new AI architectures and training techniques, "transformers" and "diffusion" to name two existing ones.

What do you think the next training technique to come to our attention will be? I mean, what will it be called? It's coming, sure as Christmas, more than likely this year too. And it will be utterly transformational in our efforts to achieve AGI, which I maintain will exist no later than 2025.

The second reason is a bit more ominous. Vladimir Putin stated, less than ten years ago I think, that whoever leads in AI will "become the ruler of the world." The national defense of the USA, China (PRC), Russia, and probably a great many other mostly first-world countries depends utterly on ever-faster developments in AI. The money is never going to dry up again, and investors know that. BTW, Nvidia of GPU fame is working on its own novel form of AI. I don't know when it will be released, but it's on the way. Could be this year, maybe.

No, AI is going to continue to develop and evolve, some of that evolution happening on its own through unanticipated emergent behaviors, but most of it through humans working as hard and as fast as they possibly can to make AGI. Because now, yes, it is a race, and everybody knows it. And like I stated earlier, it is natural and normal that this is happening. It is logical that we are at the point we are at today. Thank the Renaissance, the Catholic/Protestant Reformation, the Industrial Revolution, the Enlightenment, WWII, ENIAC, H-bombs, and "Eliza." Oh! And video games. Further, the AI itself will be developing science and technology as a multiplier on top of our now-exascale computing power. Today that processing speed hovers around 1 to 1.6 exaflops, but as soon as 2025 it is expected to be between 10 and 20 exaflops. What are quantum computers up to now? Not sure; they're a bit of a wild card in all this. But I will say this: I suspect it will take quantum computing to realize genuine consciousness in an AI. An "EI," then, may come into existence, and God help us all when (hopefully if, rather) that happens.

2

Dziadzios t1_jdzahid wrote

Dentistry is good for now. First you will be a normal dentist and earn a lot of money. Then you will just buy progressively better dentistry equipment, and one day you will earn by simply owning AI dentist robots.

4

Extreme_History_4400 t1_jdz8q05 wrote

I cannot share the details of how it works publicly, as it is still under development and patent pending. I need to protect my intellectual property and avoid any potential plagiarism or sabotage.

I've had a conversation with ChatGPT. You can watch our conversation in the video: https://vimeo.com/812318541. Note: ChatGPT-4 and Bing AI were even more impressed with the system.

Check this post too: https://www.reddit.com/r/Futurology/comments/124gj9o/hazre_a_novel_renewable_energy_system_evaluated/

2

TheSensibleTurk t1_jdz7kkb wrote

Did polisci/sociology for my BA and international relations/intelligence studies for my MA, and I couldn't be happier. Currently a fed contractor, but I have a variety of doors open: become a fed, remain in the private sector, or become a military officer now that they've raised the age limits. IMO, whether a degree improves your intellectual skills is as important as the subject matter itself.

10

bigapewhat089 t1_jdz6nn3 wrote

So you want the government to tell people what they can and cannot buy. Great idea; go to North Korea. Also, it's not like we were aware of this from the start, and once the ball starts rolling you can't just stop it. There is way too much going on here for a simple "let's stop using oil." It's not that simple: we don't want a total economic collapse; we would cease to exist that way too. Why do you think Biden opened up more oil fields in the States after promising to close them?

0

TheRedBeardedPrick t1_jdz6j87 wrote

Slightly true. In around 27 years or so, it will be relocating to the "new" equator. The impending magnetic pole excursion is happening now. Ever wonder why the auroras are getting more red? Earth's magnetic field is weakening, and more and stronger particles are making them (the auroras) more colourful. This is also the cause of larger and more powerful storms, stronger lightning, health issues, mental instabilities; the list goes on. This is also the reason the climate is so screwy. The Earth, she is a-changing. Just wait for the Sun to go micronova. That will be a sight to see!!!! Are you ready??

−5

bigapewhat089 t1_jdz60s0 wrote

Don't get me wrong, it's good that you live a minimalist lifestyle, mainly for your own wellbeing. But individual pollution is just a drop in the bucket. Most people do not share this sentiment, and it would be impossible for them all to do so. A big factor is advertising: you buy shit because you see the newest and greatest. But the main reason is that humans are evolutionarily driven to advance, so we have tons of kids (which is still not enough to sustain the economy) who also pollute. In short, we are fucked. The hope is that some tech comes out that helps stabilize this. No country will stop advancing and have a complete shutdown, putting millions of people out of work and into the streets, to save the planet.

2

Thatingles t1_jdz5q9c wrote

What really is human intelligence? Are we actually looking at intelligence, or just wetware that can glean information from the environment better than other animals can?

See how easy it is to flip that around? Intelligence is relatively easy to define in terms of outputs (I can read and write, a fish cannot) but much harder to define as a property or quality.

Software like the LLMs has some outputs that are as good as what a human can produce. Whether they achieve that through intelligence or enhanced search is an interesting debate, but the outcome is certainly intelligent.

4