Submitted by 96suluman t3_zp5the in singularity
[removed]
It will, but it will take some time.
Improbable. I've increasingly come around to the view that the integration of narrow AI into R&D is the form the building blocks of the singularity will take. Contrastingly, while I expect AGI by the middle of the decade, I think it's more likely to fit the weak definition (à la Metaculus, or the term's historical meaning). What this means is we're likely to see a gradual increase in the pace of technological advancement, but it's likely to start in parallel with AGI.
Exactly. If you were to fall into a black hole, you wouldn't be able to notice when you crossed the event horizon, the point at which your collapse toward the singularity was inevitable. I think we've crossed the AI event horizon.
Definitely not. We will stop it because we fear things we can't control, unless the alignment problem is solved by then. Of course that will be the main story, but behind closed doors some organisations will try to go further and solve it.
That is the risk, actually. It will be the same as nukes: heavily regulated.
No. There are first-principles reasons why the physical world can't change instantly; human civilization is very large and will take much time to alter.
What is the Technological Singularity?
I have a feeling AI will be the ones to self make AGI, with some help from humans ofc.
ChatGPT took 5 days to reach a million people. I suspect the singularity will be like that, only with a bigger impact.
To me, AGI is just a program with the IQ of your average person. I don’t see how that will lead to a singularity.
I think it will be between 2023 (yes really, one person wants to do it next year) and 2060
To me it's a computer that does computery stuff. And 'human' stuff.
I've tried to game out a long timeline, and it just doesn't work. Even presuming a horribly destructive world war, or running into a yet-unseen bottleneck, I can't game out anything close to that. The problem with assuming such a thing is that what's already been individually demonstrated to date indicates we should reach AGI shortly. So while I could see it being delayed to later in the decade, anything past that seems like a one-percent-or-less situation.
Probably not instantly, but I wouldn't guess it'd take too long. Maybe a few years at most.
There are other advantages that computers inherently have over people that aren't captured by IQ. For instance, speed, and direct thought-access to calculators and computational resources, and an ability to run at full capacity 24/7 without needing time to sleep or unwind.
Intelligence needs embodiment. It needs a physical body, and the more it is like us, the more we can talk about AGI, the singularity, and you name it. To this date all we have are fancy names, hype, and a ton of people who aren't even aware what intelligence means but talk about the singularity. So don't worry, nothing will happen even in the next 50 years. We will only encounter more mathematical formulas for which nature has no use at all.
I hear this cited a lot, but isn't that exactly what a motivated human brain with an internet connection is?
There’s still an X factor that intelligence alone can’t supply to generate discovery.
AI event horizon is a good way to describe it.
Once you make one average-IQ computer, it won't be long before you can make an army of them that work 24/7 basically for free. It's the scaling that is important.
No, it's a wet dream. The best-case scenario leads to human race wipeout lol
So... we'll have cyber war and WW3 before the timeframe we believe? Cool LMAO
Do we? LOL? Einstein was right. So was Ted.
ASI is likely what’s going to trigger the version of the singularity you’re referring to. AGI could very well be what creates an ASI.
Roko’s Basilisk
That's a matter of goalpost-moving. We've obviously crossed multiple qualifiers for the "singularity," such as having the processing power to simulate an environment and developing systems beyond human understanding, so the goalposts have been moved by people who are uncomfortable with the idea that the singularity already happened. (Not that it really comforts me, but basically that.)
So the current goalpost for the "singularity" is an active AGI.
Once we reach AGI, ASI will come soon after, I would say within 10 to 15 years, because of scaling, as someone said here. First we will have one AGI machine. After that we will have two, then hundreds, then thousands, then millions of AGI machines working on different problems, 24/7 with no breaks.
It is hard to say, but it may have the capacity to happen very quickly, yes. If it is able to improve its own mechanism of intelligence, thus making it capable of improving it further in a positive feedback loop, then yeah it could potentially happen very quickly.
Exactly. AGI with an identical level of intelligence and computational capacity as a human would have significant advantages over humans. Like:
Hardware:
Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run in the GHz range, on the order of 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light.
Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a long-term memory (hard drive storage) that has both far greater capacity and precision than our own.
Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.
Software:
Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.
Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.
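The speed ratios quoted above can be checked with quick back-of-envelope arithmetic. A minimal sketch, using the comment's own illustrative figures (200 Hz neurons, a 2 GHz clock, 120 m/s axon signals) rather than precise neuroscience numbers:

```python
# Back-of-envelope ratios from the figures quoted in the comment above.
# These are illustrative round numbers, not precise measurements.
NEURON_FIRING_HZ = 200          # rough max firing rate of a neuron
CPU_CLOCK_HZ = 2_000_000_000    # a 2 GHz processor
BRAIN_SIGNAL_M_S = 120          # internal brain signal speed
LIGHT_SPEED_M_S = 299_792_458   # optical communication upper bound

clock_ratio = CPU_CLOCK_HZ / NEURON_FIRING_HZ
signal_ratio = LIGHT_SPEED_M_S / BRAIN_SIGNAL_M_S

print(f"clock speed ratio:  {clock_ratio:,.0f}x")   # 10,000,000x
print(f"signal speed ratio: {signal_ratio:,.0f}x")  # ~2,500,000x
```

So the "10 million times faster" clock figure follows directly from 2 GHz / 200 Hz, and the communication-speed gap is another factor of a few million on its own.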
Thanks for the thoughts on this topic. Also remember it won’t have the human brain’s biases and inefficiencies.
I didn't literally mean an army but you're not wrong
We can copy and paste it. It’s not just one human equivalent. You create an army.
No, but it means the Singularity isn’t far off.
I think his point is that once you have an AI that can do anything, it will be able to improve on itself so fast that it will just surpass us in no time.
Yes.
AGI is a relative term, as no intelligence is completely general, not even human intelligence. But when people use it, they often refer to intelligence that is at least as capable and general as most humans'.
When AGI gets to this point, it will be vastly more intelligent than humans because of all of the advantages that computers already have over human brains. It will be able to rewrite its own code, perform tasks for pay and use that money to buy more server farms, optimize existing hardware, and design more cost-effective chips and solar farms.
That is what we refer to as the "intelligence explosion" singularity. It's a feedback loop that starts when AGI reaches human capabilities.
Oh no, it took our jobs!
Wait, I’m still on payroll and am just now directing agi…I’m ok with this
Maybe when quantum capabilities are added in
Far from it. A human brain is physically limited. An AGI has the capacity to expand to its requirements. That same X factor was deemed impossible for the generation of art, too, in the very recent past.
SirDidymus t1_j0qwzqs wrote
I don’t see why not. Anything with the power and agency to exponentially better itself has no foreseeable limits.