Recent comments in /f/Futurology
IgnobleQuetzalcoatl t1_jdmnlxm wrote
Reply to A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
A few things to note based on the comments here.
(1) This isn't particularly new or noteworthy. This kind of thing has been done for at least a decade. They claim better results than previous efforts, but their examples don't appear categorically better. Setting aside previous efforts, the results here are just not that good. They kinda get a sense of what the participants are viewing, but that's it.
(2) This isn't mind-reading in the colloquial sense that people are interpreting it as. They are using brain activity while participants are actually viewing images, not while they are imagining them. That is a big difference and is much easier than anything that would generally be considered "mind-reading".
(3) Even if it were mind-reading, and even if it actually were high-fidelity, this requires a million-dollar MRI machine and having a participant basically bolted onto a sled for a couple of hours. All the comments by people talking about how we're all doomed and privacy is gone seem to be missing that fact.
satans_toast t1_jdmnkys wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
As with all recent developments in computing, the consumers will be the losers.
OriginalCompetitive t1_jdmnbh6 wrote
Reply to comment by neuralbeans in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
You said you couldn’t think of any reason why we would be different from a complex computer. One possible reason is that we’re conscious, and it’s possible complex computers will not be.
We don’t know what causes consciousness, but there’s no reason to think intelligence has anything to do with it.
[deleted] t1_jdmn6pw wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
[removed]
[deleted] t1_jdmmzyv wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
[removed]
oh_wheelie t1_jdmmwfh wrote
Reply to comment by SomeoneSomewhere1984 in Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
So not much changes there lol.
Defiyance t1_jdmmsja wrote
Reply to comment by FeatheryBallOfFluff in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
Because if it can be used for that, it will be used for that by the current pricks in charge. Maybe we should restructure our society before we come up with a bunch of tech out of a dystopian wet dream.
SomeoneSomewhere1984 t1_jdmmhis wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
Billionaires will be winners and the rest of humanity will be losers.
theonlyone38 t1_jdmmdbx wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
Everyone wins and loses on some level. Sure, it's great to beep boop something every time you need an answer, but I worry that now that an AI can give you the complete answer, people won't even bother to use their brains.
peadith t1_jdmm9kl wrote
Reply to comment by Kiizmod0 in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
That's just something we flatter ourselves with because we don't really know how we work.
[deleted] t1_jdmm1ok wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
[removed]
Throwaway-tan t1_jdmlriq wrote
Reply to comment by Philosipho in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
What? I'm not sure what your criticism is targeting... Is it that society is run by people?
Society has generally been a net positive for everyone. We went from subsistence and survivalism to plenitude and philosophy.
Even a feudal society is preferable to no society in my opinion.
It's not perfect, but I much prefer the fucked-up society we have now compared to "return to monke".
grundar t1_jdmljf3 wrote
Reply to comment by SpiritualTwo5256 in There Is Still Plenty We Can Do to Slow Climate Change by nastratin
> No tipping points under 4C?
None with a timescale under 200 years, according to this paper published in Science.
If you feel the editors of Science have made an error in publishing that paper, you are free to take it up with them.
> the effect of carbon in the atmosphere is expected to last 200 years.
Sure, but other feedback mechanisms will tend to sequester it, and as a result warming will stop shortly after emissions stop.
The scientific consensus is that stopping emissions is enough to stop warming. The scenarios on p.13-14 of the IPCC report show clearly that warming stops shortly after net zero emissions are reached, and temperatures will decline after a period of net negative emissions (as in SSP1-1.9).
I recognize that some of these findings may be counterintuitive to you, but that just highlights how complex science is and how important it is to pay attention to the experts rather than our gut feelings.
Cubey42 t1_jdml7rs wrote
Reply to comment by paperdahlia in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
It could also upend our entire criminal justice system. Imagine the power of being able to subpoena someone and have their mind read. "Beyond a reasonable doubt" would come to an end.
Throwaway-tan t1_jdmktew wrote
Reply to comment by chocolatehippogryph in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
External "read only" brain wave monitoring is one thing. Internal direct interface chips is a whole other can of worms.
Computers are inherently insecure, and now you want to intrinsically tie your existence to one. OK when someone ransomwares your free will, the government fires off a kill switch or a rogue brain worm sends everyone into a bath salts style murder rage I don't want to hear a peep from the optimists.
urmomaisjabbathehutt t1_jdmkncx wrote
Reply to comment by [deleted] in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
Will it be able to pull images of possible suspects from the subject's memory and recognize that the subject is familiar with those individuals?
That could be used for crime solving, but an authoritarian government would also love to know which people a dissenter meets and associates with.
mariegriffiths t1_jdmkh8z wrote
Reply to A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
I just skim-read it. It looks like the brain stores a visual model and a semantic model. The visual model matches what is actually there, for real-time manipulation, while the semantic model gets stored for long-term use, like a cartoon version of what we see. Is this why cartoons work so well?
[deleted] t1_jdmkgfv wrote
Philosipho t1_jdmkb8z wrote
Reply to comment by Throwaway-tan in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
People decided it was a good idea to let citizens control the economy and government, because they wanted the opportunity to have that wealth and power themselves.
Society is just one big episode of r/LeopardsAteMyFace
420resutidder t1_jdmjde4 wrote
Reply to What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
What if the earth is not the center of the universe?
i0i0i t1_jdmj6q5 wrote
Reply to comment by ErikTheAngry in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
We don’t have a rigorous definition of intelligence. How sure are you that you’re ever being truly creative? Next time you’re talking to someone, as you’re speaking, pay close attention to the next word that comes out of your mouth. Where did it come from? When did you choose that specific word to follow the previous one? What algorithm is being followed in your brain that resulted in the choice of that word? The fact is that we don’t know, and not having a real understanding of human intelligence should make us at least somewhat open to the possibility that an artificial system that is quickly becoming indistinguishable from an intelligent agent may in fact be, or become, an intelligent agent.
chocolatehippogryph t1_jdmiuev wrote
Reply to comment by Throwaway-tan in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
Yeah man. We are on the precipice of horror and greatness...
Related anecdote: I once met a German tech CEO, around 60-65 years old, on an airplane, and we were talking about potential near-future tech. I think we started talking about Neuralink; my mind went to the possibilities for increasing accessibility for disabled people, etc. He immediately started talking about how, if you could read people's minds, you could make sure they were paying attention at meetings and generally keep them focused and productive during work.
It was pretty horrifying, but I think this will happen. Wealthier people will see the benefits of technology-mind integration. For the poorest, it will just be another implement of control.
Examiner7 t1_jdmnmy3 wrote
Reply to comment by on1chi in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
It will probably see that we've already given up and know that it will eventually win.