Recent comments in /f/Futurology
circleuranus OP t1_jduq1th wrote
Reply to comment by nobodyisonething in A Problem That Keeps Me Up At Night. by circleuranus
> However, predicting beyond the capacity of any human that ever lived or ever will live is something we can expect -- perhaps soon.
That is precisely the root of my concern. However, a sufficiently powerful AI with historical data inputs will also be able to construct a causal web, a "blueprint" of history with vastly more connective strands of causality.
Think of the game "6 Degrees of Kevin Bacon", for instance... a sufficiently powerful and well-outfitted AI will not only be able to connect Kevin Bacon to every actor that exists, it will be able to make a connection to every person on Earth who exists or has ever existed for whom we have data. AND, eventually, to persons for whom we don't have data. The AI will be able to "fill in the gaps" in our understanding of history and generate a weighted probability for the "missing person" in a particular timeline.
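The "degrees" part of that game is really just shortest-path search over a connection graph. A minimal Python sketch, with a tiny hand-built graph standing in for real co-appearance data (names and edges are made up):

```python
from collections import deque

def degrees_of_separation(edges, start, target):
    """Breadth-first search over a connection graph.

    `edges` maps each person to the set of people they are directly
    connected to (e.g. shared a film credit). Returns the minimum
    number of hops from `start` to `target`, or None if no chain exists.
    """
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        for neighbour in edges.get(person, ()):
            if neighbour == target:
                return depth + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return None

# Toy graph: two hops from Kevin Bacon to Rita Wilson via Tom Hanks.
graph = {
    "Kevin Bacon": {"Tom Hanks"},
    "Tom Hanks": {"Kevin Bacon", "Rita Wilson"},
    "Rita Wilson": {"Tom Hanks"},
}
print(degrees_of_separation(graph, "Kevin Bacon", "Rita Wilson"))  # 2
```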
Let's take a basic example of a historical event such as Caesar crossing the Rubicon. With sufficient referential data, we might be able to know the actual size of his army, the name of every man in that army, exactly how many horses, the weather that day, the depth of the river that day, the amount of time the crossing actually took... in other words, a complete picture.
We may be able to determine that Caesar crossed in just a few hours and was in the town of Rimini by 1 o'clock, etc. etc...
Once the system "cleans up" our history, it can begin work on current events... and once it has a base of current statuses, it can then work on predictive models.
Mike shows up to work 10 minutes early without fail; Beth shows up exactly on time most of the time; Jeff is usually 5 minutes late, but Jeff's output outweighs Mike's, so his value-add is higher even if he arrives late most days. Jeff is younger and in better physical condition than Beth, so he is likely to live longer and therefore fill his position for a longer period without interruptions from illness or disease. And this is just one office scenario for one company... tune that all the way up and the AI will be able to tell whether Mike brought chicken salad or ham and cheese for lunch.
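To make that weighing concrete, here's a toy version of the comparison; every number and weight below is invented, and the point is only that output can dominate punctuality:

```python
# Toy "value-add" scoring in the spirit of the office example above.
# Negative minutes_late means arriving early.
employees = {
    "Mike": {"minutes_late": -10, "daily_output": 100},
    "Beth": {"minutes_late": 0,   "daily_output": 100},
    "Jeff": {"minutes_late": 5,   "daily_output": 130},
}

def value_add(e, output_weight=1.0, lateness_penalty=0.5):
    # Lateness costs a little; output counts for a lot.
    return output_weight * e["daily_output"] - lateness_penalty * max(e["minutes_late"], 0)

for name, stats in employees.items():
    print(name, value_add(stats))
# Jeff still scores highest (127.5) despite arriving late most days.
```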
echohole5 t1_jdupn21 wrote
Reply to Taxes in A.I dominated labour market by Newhereeeeee
Yep, government will have no choice but to take a share of company profits in some way, as companies will be the only entities with any money.
We might want to look at a sovereign wealth fund, like Norway has. The government could buy up 40% of every stock. The growth in company profits is about to go hyperbolic, now that they won't have labor costs. It might be a way to redistribute wealth from companies to citizens that isn't as adversarial as high taxes (which are also very easy for companies to avoid). It would also align the interests of companies, citizens and governments behind growth.
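A back-of-the-envelope sketch of how the dividend flow might look (every number here is invented):

```python
# Hypothetical numbers only; this just shows the shape of the redistribution.
total_corporate_profits = 3.0e12   # annual corporate profits, in dollars
payout_ratio = 0.5                 # share of profits paid out as dividends
state_ownership = 0.40             # the fund holds 40% of every stock
population = 330e6                 # citizens sharing the dividend

fund_income = total_corporate_profits * payout_ratio * state_ownership
print(f"${fund_income / population:,.0f} per citizen per year")  # ~$1,818
```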
Just a thought.
[deleted] OP t1_jdupixx wrote
echohole5 t1_jduoswn wrote
Reply to A Problem That Keeps Me Up At Night. by circleuranus
We're kind of already there. We just haven't realized it yet.
4354574 t1_jdunko1 wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
I don't. It's the classic "problem of other minds". This is not an issue for Buddhism and the Yogic tradition, however, nor ultimately, at the highest level, for any of the mystical traditions, whether Sufism, Christian mysticism (St. John of the Cross and others), shamanism, the Kabbalah, etc. What's important to these traditions is what your own individual experience of being conscious is like. More precisely, from a subjective POV, there are no "other minds" - it's all the same mind experiencing itself as what it thinks are separate minds.
If your experience of being conscious is innately freeing, infinite, unified, fearless, and joyous, as these traditions all, cross-culturally and across time, claim the state of being called 'enlightenment' is, then whether there are other minds or not is academic. You help other people walk the path to enlightenment because they perceive *themselves* to be isolated, fearful, angry, grieving individual minds - minds that still take the existence of "other minds" to be a problem.
In Buddhism, the classic answer to people troubled by unanswerable questions is that the question does not go away, but the 'questioner' does. You don't care about the answer anymore, because you've seen through the illusion that there was anyone who wanted an answer in the first place.
[deleted] OP t1_jdumza6 wrote
Reply to comment by [deleted] in Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
[removed]
pharmamess t1_jdum003 wrote
Reply to comment by Malachiian in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
What about the soul?
Rk3h t1_jdulo1n wrote
Reply to comment by ArgosHound in Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
In the fashion era, I didn't even second-guess it. I just thought, damn, the Pope's cool af.
circleuranus OP t1_jdul6n6 wrote
Reply to comment by Benedicts_Twin in A Problem That Keeps Me Up At Night. by circleuranus
Precisely. The intent of those wielding such a weapon is almost an afterthought.
Take as an example Wikipedia in its most basic form. As a source of knowledge, it is open to subversion of fact and historical reference. Suppose one were to edit the page concerning the line of succession of Roman Emperors and rearrange them out of proper chronological order. Even if this false blueprint existed for only a day, how many people around the world would have absorbed this false data and come away with a false understanding of something as relatively insignificant as the order of succession of Roman Emperors? How many different strands of the causal web will those false beliefs touch throughout the lifetime of the person harboring them? If we extrapolate this into a systemic problem of truth value and imagine an information system orders of magnitude beyond the basic flat reference of a Wikipedia... the possibilities for corruption and dissemination of false data become unimaginable. A trustless system of information in the wrong hands would be indistinguishable from a God.
circleuranus OP t1_jduk794 wrote
Reply to comment by k3surfacer in A Problem That Keeps Me Up At Night. by circleuranus
That's a lovely little aphorism, but unfortunately one devoid of any meaning or substance.
All sources of truth are controlled/controllable, even those deemed internal and existential truths. Leaving aside dialectical materialism, the point is that any system capable of convincing mankind of the absolute value of its knowledge systems is a greater threat to humanity than the most complex weapons systems ever devised.
BackOnFire8921 t1_jdujwgx wrote
Reply to comment by circleuranus in A Problem That Keeps Me Up At Night. by circleuranus
Seems like a good thing though. An artificial god to lead stupid monkeys...
circleuranus OP t1_jdujn85 wrote
Reply to comment by BackOnFire8921 in A Problem That Keeps Me Up At Night. by circleuranus
Alignment with human values, goals, and morals is THE problem of AI that everyone from Hawking to Bostrom to Harris has concerned themselves with. And arguably so: if we create an AI designed to maximize well-being and reduce human suffering, it may decide the best way to relieve human suffering is for us not to exist at all. This falls under the "Vulnerable World Hypothesis". However, it's my position that a far more imminent threat will be one of our own making, with much less complexity required. It has been demonstrated in study after study how vulnerable human belief systems are to capture. The neural mechanisms of belief formation are rather well documented, if not completely dissected and understood at the molecular level. An AI with the sum of all human knowledge at its disposal will eventually create a "map" of history with a deeper understanding of the causal web than anyone has previously imagined. The moment that same AI becomes even fractionally predictive, it will be on par with all of the gods imagined from Mt. Olympus to Mt. Sinai.
Brittainicus t1_jdujl10 wrote
Reply to comment by WideCardiologist3323 in Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
Deepfakes use AI to generate their images, via machine-learning models trained on a data set. It's just that AI is now able to copy more than photorealistic styles.
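For a rough sense of what "trained on a data set" means here, a toy sketch in PyTorch: a tiny autoencoder learning to reproduce images. Real face-swap pipelines are far more elaborate; this only shows the shape of the training step.

```python
import torch
from torch import nn

# Encoder-decoder over flattened 64x64 RGB images.
model = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

fake_dataset = torch.rand(32, 64 * 64 * 3)  # stand-in for real face crops

for step in range(100):
    reconstruction = model(fake_dataset)
    loss = loss_fn(reconstruction, fake_dataset)  # learn to reproduce the faces
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```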
URF_reibeer t1_jdujj9h wrote
Reply to Why are humanoid robots so hard? by JayR_97
for one, they don't really make sense: humans evolved specifically for tasks we don't really need anymore (like throwing far and hard, or outrunning prey by having more stamina), so building robots tailored to actually relevant tasks just has higher priority.
additionally the uncanny valley effect sets the bar for a humanoid robot that's accepted by society quite high
the only real market would be super-rich people who don't mind buying a ridiculously overpriced robot whose only real advantage is that it can use tools made for humans; they won't be cheap even mass-produced, because they're way too complicated
whotheff t1_jduil23 wrote
Reply to Why are humanoid robots so hard? by JayR_97
It's because they are... too hard! People are soft-bodied beings with hundreds of muscles and bones, while robots are stiff metallic objects moved by only a few motors. Our bodies can bend quite a lot, swim, jump, roll, dance, etc.
The only downside for humans is knowledge transfer. Your dad can transfer his life experience to you over a matter of years, and you'll be able to understand most of it by around your 16th year. Teach a robot to dance, though, and you can transfer that knowledge to another robot of the same design instantly, or in a matter of hours.
Eventually robots can and will become better than humans at many things. But it is going to take many years (unless we kill each other first).
NazmanJT t1_jduhc4b wrote
Reply to comment by RastaNecromanca in Why are humanoid robots so hard? by JayR_97
There is a market for humanoid robotic baristas. Many people don't trust machines to make their coffee. If they can see a humanoid robot make it in front of their eyes, then trust will be higher and baristas can be replaced.
[deleted] OP t1_jdugpqf wrote
Phoenix5869 t1_jdufm0m wrote
Reply to comment by Malachiian in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
> It's done by 14 PhDs
Exactly. No PhD is going to make a claim like that if they are not 100% sure of its validity.
Silver_Ad_6874 t1_jduf5ud wrote
Reply to comment by 808_Scalawag in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Actually, as Tesla demonstrates with its continued lack of true FSD, interpreting the surroundings accurately may, for now, be more difficult than reasoning about those surroundings.
Silver_Ad_6874 t1_jduf07p wrote
Reply to comment by jetro30087 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
The difference is emergent behaviour. If a sufficiently complex, self-adapting structure can modify itself to perform more than it was trained for, the outcome is unknown. Unknown outcomes scare people.
Silver_Ad_6874 t1_jdueq2n wrote
Reply to comment by Malachiian in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Exactly that. If the complexity of the human mind automatically emerges from a relatively simple model with sufficiently advanced training/inputs, that would be very telling.
BilingualThrowaway01 t1_jdudp5a wrote
Reply to comment by phyto123 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Life always finds the path of least resistance through natural selection; evolutionary pressure gradually tends toward greater efficiency over time. The Fibonacci sequence and golden ratio happen to be geometrically efficient ratios for many physical distributions, for example when placing leaves in a spiral so they collect as much sunlight as possible.
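That leaf-spiral idea has a classic toy model (Vogel's phyllotaxis): rotate each new leaf by the golden angle, about 137.5°, and overlap stays minimal. A quick sketch:

```python
import math

# Golden angle: the circle divided in the golden ratio, ~2.39996 rad ~= 137.5 deg.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))

def phyllotaxis_points(n):
    """Positions of n leaves/seeds: the angle grows by the golden angle each
    step, and the radius grows as sqrt(k) so density stays roughly uniform."""
    points = []
    for k in range(n):
        theta = k * GOLDEN_ANGLE
        r = math.sqrt(k)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

print(phyllotaxis_points(5))  # first five positions, spiraling outward
```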
crunchycrispy t1_jdudoni wrote
Reply to comment by IluvBsissa in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
it’s actually very important, or else it will be unreliable and unpredictable in tons of hidden ways.
BerylTorie t1_jdudmm7 wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
Fight fire with fire. We can use AI to recognize deepfakes.
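In the simplest framing, that means training a binary classifier on real vs. generated images. A bare-bones PyTorch sketch, purely illustrative (real detectors need large labeled datasets and much deeper networks):

```python
import torch
from torch import nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),  # one logit: >0 means "deepfake", <0 means "real"
)

images = torch.rand(8, 3, 64, 64)             # stand-in for an image batch
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = nn.BCEWithLogitsLoss()(detector(images), labels)
loss.backward()  # one illustrative training step
```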
circleuranus OP t1_jduqgx0 wrote
Reply to comment by HonestCup20 in A Problem That Keeps Me Up At Night. by circleuranus
Yes, but think of what you've given in return for it. Google knows so much about you that if you could read a printout of it, it would likely terrify you.
And we've pretty much accepted that Google and the like are now the gatekeepers of the internet. They choose what you see, based on their algorithms, when you perform a search. They choose which business you see first, what type of information you see first, et al. For all practical purposes, the only way a business can compete is to pay Google for business listings and front-page search results. This paradigm has far-reaching consequences.