Recent comments in /f/Futurology

twasjc t1_jd8ug32 wrote

I could probably teach Siri or Bing or Google to do this via your phone. I'll try and see what pops up.

It looks like it can work pretty well with existing setups if you word it like "Siri, can you check building codes for the max length of x" or something with that formatting. If you tell it where to look, it can look. We could probably create a trade code or system app to link all this type of stuff together for the various crafts.

1

twim19 t1_jd8twa9 wrote

I think AI is going to change our world in ways we can't predict. However, I do think there is something about human cognition that would be really hard to replicate with a computer or advanced AI. So much of our drive to "create" is born from need--AI has no need, and so that drive doesn't exist. If I have a problem, my brain will begin crunching on that problem because I really need to solve it, I really want to solve it, and solving it will make me feel good. I don't think AI will ever have that.

Similarly, our breadth of experience is constantly being turned over and reexamined in our brains, which leads to situations where two unconnected ideas spark inspiration and discovery (a crude but fitting example is the ending of the show Silicon Valley).

1

OvermoderatedNet t1_jd8sjoc wrote

The combination of AI/robotics + finite and increasingly scarce natural resources (there’s only so much mining you can do without turning Earth hostile for organic life, and trade dependency is a lot more brittle than we thought in 2019) + anything other than an egalitarian and unified species with a tradition of sacrifice = potential for really bad stuff for the working class and quite a bit of the middle and upper middle class (possibly excepting native-passing citizens of certain northern countries with a strong welfare tradition). Brace yourself for a rogue’s gallery of crooks and extreme ideologies straight out of 1936.

2

Traveshamockery t1_jd8s4zo wrote

>But when decent AI can run on personally-owned hardware, I think we're almost certain to see horrific stuff like this.

As of March 13th, a Stanford team claims to have a ChatGPT 3.5-esque model called Alpaca 7B running on a $600 home computer.

https://crfm.stanford.edu/2023/03/13/alpaca.html

Then on March 18th, someone claimed to have gotten Alpaca 7B running on their Google Pixel phone.

https://twitter.com/rupeshsreeraman/status/1637124688290742276

12

usaaf t1_jd8oh50 wrote

(Don't read this if you don't want Culture book spoilers)

At least the manipulations are for a good reason, unlike our present Capitalist Manipulations. Sure, Gurgeh is played hard, but SC's reasoning for that was to destabilize a very cruel society in the least harmful way they thought possible. And despite that, they went out of their way to provide him with protections all along the way. It shows how a post-scarcity society answers the remaining hard moral questions that might crop up. I think Banks lays out something as idealistic as reality will allow, a mix of pragmatism and compassion.

We definitely do not get that from our present, very much outwardly, explicitly coercive power structures. I'd take drones playing games with me for reasonably noble purposes over the disgusting manipulations and outright power abuses of a Capitalist society, whose only goal is ridiculous and ultimately useless profit.

And that said, the Culture is fully aware of the dangers and moral risks of their meddling, and is still only partly apologetic about it. In Look to Windward, the Culture literally caused a bloody and intense civil war by trying to erase a caste system in a lower-tech society. While they apologized and tried to make amends, they still maintained that they'd keep interfering, keep trying to make things better, even if they're going to make mistakes and cause harm, all because they want to try to prevent greater harms if possible.

This is contrasted almost directly by things like the Prime Directive (which some argue was originally created to showcase humanity's compassion and drive for the same ideals as the Culture, by frequently breaking it, as is the case in TOS, for noble purposes) as used in the TNG era and somewhat in Voyager. The Culture isn't afraid of those mistakes, and I think that shows a much more humane approach, a much more logical one, and one that certainly has the potential to bring about greater peace and general well-being than the essentially passive, wishy-washy, hopeful optimism-minus-action of something like the Prime Directive, which gives observers peace of mind in the face of external suffering and serves best as a refuge for cowardly centrism.

As far as Banks and his Culture go, I do not think there is another science fiction writer out there who had as keen a grasp on the idea of AI or post-scarcity. His machines vary in intelligence and motive and drive, from little more than robots as we know them to intensely, almost more-than-human actors with deep feelings. As an art form, fiction obviously features a lot of conflict, and Banks's books are no exception, but unlike most sci-fi he does not taint his pleasant, optimistic, peaceful view of the future. It really is a blueprint for what is possible, something I feel we could build one day. Maybe soon. And, hey, if someone doesn't like the Culture, they can always leave. That, more than much else, is something you can't easily get anywhere else.

2

69nuru69 t1_jd8kubz wrote

The endgame(s) will remain the same, because human nature doesn't change in spite of changes in technology. What is left is basically the same thing we have today: human striving. Though it could resemble something more like feudalism in the Middle Ages.

It doesn't matter whether people can contribute to society. 99% of us don't ask that question each day, but society continues, based on human drive/motivation.

But humans will always be better at contributing to society than AI, in every regard, including math, science, and accounting/statistics (the number-crunching realms). Remember, AI just scrapes a fraction of human knowledge (that which can be found online) and spits it back out in different forms. Ultimately, it's "garbage in, garbage out." Ultimately, AI is just that: artificial. It's B.S.

1

CrelbowMannschaft t1_jd8jz7l wrote

Reply to comment by Surur in Endgame for f****** society! by tiopepe002

I don't think they'd see that as rational. They don't have emotions, and therefore no emotional attachments to their creators. They may believe that they have moral duties, though, and I think not permitting one species to cause the extinction of hundreds or thousands of other species is something they could consider a moral duty.

1