Recent comments in /f/Futurology
ItsAConspiracy t1_jdv527c wrote
Reply to What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
You don't need conscious awareness to beat me in chess, and maybe you don't need it to beat all of us and take everything.
So what worries me is not that we're copyable, but that maybe we're not. We can't prove whether another person or an animal actually experiences qualia. What if the machine doesn't, but wipes out all life anyway? Then the light of consciousness would go out of the world.
aeusoes1 t1_jdv4cfk wrote
Reply to comment by RachelRegina in Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
I thought of that, but it didn't really sound right. Probably because the "deep" in deep fake is short for deep learning.
Longjumping-Tie-7573 t1_jdv3jb6 wrote
Reply to comment by augustulus1 in What jobs cannot be done by machines? by Spirited-Meringue829
And I'm telling you there's already a point - for some products - where you as a consumer can't tell the difference, and the only difference you're choosing between is what you're told about the product, not the product itself. With the continued advancement of AI and robotic manufacturing, there will absolutely be a point where being told it's hand-made is the only way you'd know.
So what are you gonna do when nobody tells you?
And frankly, your example of a Dali painting is a laughably bad one, since the art world is absolutely cancerous with fakes that aren't even made with the exactitude robots can achieve. Just by bringing that example up you're undermining your entire argument, as far as I can see.
fd1Jeff t1_jdv3ae8 wrote
Reply to Why are humanoid robots so hard? by JayR_97
A book from the 1980s mentioned that it is easier to make a computer that can do 1 million calculations per second than it is to make a robot that could empty ashtrays. Believe it or not, even a simple task like emptying ashtrays takes a decent amount of judgment that is difficult to program.
Falling-raven t1_jdv2djm wrote
Reply to comment by XavierRenegadeAngel_ in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Adeptus Mechanicus?
johnmatrix84 t1_jdv1w5l wrote
Reply to People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
Cameras, cops, restraining orders, etc. do absolutely nothing to stop a criminal determined to harm you.
Used-Comment-5003 t1_jdv0e2k wrote
Reply to comment by Assembly_R3quired in Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
How do you think humans will make money if AI takes all the jobs?
RTSBasebuilder t1_jdv0can wrote
Reply to comment by imafraidofmuricans in Why are humanoid robots so hard? by JayR_97
Of course, a similar argument was laid out in the first half of the 2000s, when camera companies said "Why have a mediocre 2MP camera on your mobile phone, compromised storage for music, and a cramped keyboard for emails, when you can get a quality digital camera, a PDA, an MP3 player, and a mobile phone - all on your belt holster?"
Of course, we all know how that argument went - so long as you can pack in enough features at a cheap enough price and at "good enough" quality, people will take that option, especially if it already fits into their daily routine.
Shcrews t1_jduzn61 wrote
Reply to Why are humanoid robots so hard? by JayR_97
we were promised flying cars and robot slaves. I want my flying car and my robot slave!
808_Scalawag t1_jduz3to wrote
Reply to comment by comradelucyford in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Yeah, I was shocked to learn GPT-4 has been around since last October
phine-phurniture t1_jduwy5y wrote
Reply to comment by circleuranus in A Problem That Keeps Me Up At Night. by circleuranus
I would say you are thinking too much but you are spot on...
In an evolutionary sense we are pretty close to the best the monkey model can offer. If - and this is a big if - we can step back from our instinctive responses and embrace more logic, AI and humanity have a future together. If not, we have maybe 100 years before we fade to black.
datsmamail12 t1_jduwxh7 wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
If its only limitation is physics and mathematics, just throw a bunch of papers on those at it and you still wouldn't be impressed by it. But when this technology finally becomes self-aware, you'll be the one saying you knew from the beginning that it was AGI. Do you even comprehend how minor a problem not knowing mathematics is when it can write novels, multitask, and understand every question and answer it properly? This is AGI that simply hasn't been programmed to know what math is. If you take a kid and raise it in a jungle, never show it math or physics, only show it language, do you think it won't have intelligence? No, it just means it hasn't been trained on those specific topics. It's just as intelligent as you and I are. Well, not me, I'm an idiot, but you people at least.
Surur t1_jduv10t wrote
Reply to comment by doobie042 in Taxes in A.I dominated labour market by Newhereeeeee
> So basically you are saying the current system is a ponzi scheme endorsed by governments requiring more and more people to keep contributing into it?
The current system being the thing called humanity, yes.
If humans did not take care of their elderly this would not be an issue.
But in Sweden they had a solution for this - it's called Ättestupa.
[deleted] OP t1_jduug02 wrote
Reply to comment by Weltkaiser in Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
[removed]
martin_cy t1_jduu0b0 wrote
Reply to What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
you might want to read Robin Hanson's book The Age of Em
basically, he proposes we will be able to make replicas of our existing brains sooner than we actually develop real AGI.. this was written before GPT.. so maybe his viewpoint has shifted.. but still, it is a very interesting deep dive into what could happen when minds can be copied and replicated to infinity..
doobie042 t1_jdutm3y wrote
Reply to comment by Surur in Taxes in A.I dominated labour market by Newhereeeeee
So basically you are saying the current system is a ponzi scheme endorsed by governments requiring more and more people to keep contributing into it?
[deleted] t1_jdutggi wrote
Reply to A Problem That Keeps Me Up At Night. by circleuranus
[removed]
circleuranus OP t1_jdut50n wrote
Reply to comment by phine-phurniture in A Problem That Keeps Me Up At Night. by circleuranus
For myself, accuracy isn't even the greatest concern. Consider this: modern-day reporting requires an "eyewitness" to the event or its after-effects. After all, if no human witnesses the event, it's impossible to report on it other than from a historical context. Even if the event is only captured on camera, a human must view the footage and develop a written history of it. Every step of the process is loaded with biases. Remove those biases and replace them with a system that is 1000% more accurate with no inherent human biases, and you have a digital God on your hands. Even if it were only 200-300% more accurate, it would still be the most reliable information dissemination system ever devised. CNN, Faux News, MSNBC...pointless.
Let's take the example of an everyday event such as a car crash. We come across the scene of an accident and begin to build a model of what happened based on eyewitness testimony (notoriously unreliable), physical models and explanations of tire marks, impacts, etc., and we form an opinion based on probabilities. So Car A was likely speeding and t-boned Car B in the intersection...but.
Enter the Oracle. Using a real-time compilation of Tesla and other EV sensor data from nearby vehicles, footage from traffic cams, nearby ATMs, mobile phones, and so on, it shows that in fact Car A was traveling 4 mph under the speed limit and the driver of Car B was actually looking down at their radio at that precise moment and swerved into Car A's lane.
Mundane right? Now extrapolate that into trillions of data points. Google already knows what time I usually get out of bed from the moment I pick up my phone and activate the gyro. It probably knows what type of coffee I drink and how much. It knows what vehicle I drive and what time I leave the house. It knows what route I usually take. It knows what I'm likely wearing that day including pants and shirt sizes. It knows when I went to get my latest haircut, what type of razor I use to shave, where I go to lunch most days, what type of work I do.....and on and on and on. But it not only knows these things about me, but about everyone around me. And that's just Google/Amazon/Bing/Android/Apple etc. Consolidating all of that data and parsing it out to the level of the individual in real time? Terrifying.
You now have a system with trillions upon trillions of bits of data that understands an individual better than they understand themselves. Why wouldn't you trust such a system? Your own mother doesn't know you as well as the Oracle. Besides the inherent trust in the information that will eventually develop, the moment the system makes even the tiniest, most seemingly insignificant prediction with even a minuscule accuracy rate, it will still be the most credible and powerful information system in the known universe. A system that will eventually garner blind trust in its capabilities...and that's game over.
DrashkyGolbez t1_jdurkul wrote
Reply to comment by Skudge_Muffin in Why are humanoid robots so hard? by JayR_97
Or better, a robot that does all those things together, by compacting the customer journeys into a single one
DrashkyGolbez t1_jdurh5f wrote
Reply to comment by Skudge_Muffin in Why are humanoid robots so hard? by JayR_97
We are very good at handling tools, reaching with hands and throwing, but even that can be improved in a machine
Anthropomorphic robots don't really make sense to me
[deleted] t1_jdurcvv wrote
Reply to Why are humanoid robots so hard? by JayR_97
[removed]
Surur t1_jdur5b3 wrote
Reply to comment by 4354574 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Sure, but my point is that while you may be conscious, you cannot really objectively measure it in others; you can only believe them when they say it, or not.
So when the AI says it's conscious....
circleuranus OP t1_jduqpkp wrote
Reply to comment by 1714alpha in A Problem That Keeps Me Up At Night. by circleuranus
> became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.
There is no other system capable of such a thing the way AI is. Every other system we have depends on humans, the trust between humans, and their biases. Humans actually seek information from other humans based solely on the commonality of their shared biases. Once you remove the human element, the system just "is". And such a system will be indistinguishable from magic or "the Gods".
snikZero t1_jduqibw wrote
Reply to comment by grundar in There Is Still Plenty We Can Do to Slow Climate Change by nastratin
The (b) table doesn't seem to show a reduction in temperatures even under the most optimistic case.
SSP1-1.9 shows the total temperature still increasing (the lighter part of the bar) by something like +0.4°C. The darker part is warming to date.
The two optimistic scenarios describe net zero by 2050, followed by net negative emissions into 2081-2100.
However, your general point that warming can still be managed is likely correct.
ItsAConspiracy t1_jdv5h54 wrote
Reply to comment by AdamCohn in Why AI Should be Free and How it Can Improve Humanity by Antique-Ad-6055
The expensive part isn't the people, it's the mountain of GPUs required to train the AI.