Recent comments in /f/Futurology

ItsAConspiracy t1_je4t85y wrote

Flying cars are more practical than you think. Read the book Where Is My Flying Car?

General aviation, i.e. small private planes, used to be a much bigger thing than now. Back in the 1970s there were about ten times as many Cessnas and similar planes flying, and about that many more small runways. It was working out fine, and then the FAA threw such a heavy load of regulation on top of the industry that it collapsed.

Among other things it became really difficult to develop new aircraft and get them approved. This did not improve safety; it worsened it, as people had to rely on old technology.

Contrary to common fear-mongering, people with pilot licenses are perfectly capable of flying small aircraft without crashing everywhere, and many "flying car" designs are easier to fly. Decades ago we still would have needed serious training, but today, automating a flying car is way easier than automating a ground car, and we already have computer-controlled flying drones.

The big advantage of flying cars is speed. The book argues that you get a big jump in economic productivity when the average person can travel farther in an hour. With flying cars, that one-hour range could be several hundred miles.
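To put rough numbers on that (my own illustrative figures, not the book's): a car averages maybe 30 mph door to door, while a flying car might cruise at 300 mph, and the reachable area grows with the square of the one-hour radius.

```python
# Rough illustration of the one-hour-range argument.
# Speeds are assumptions for the sake of the example.
car_mph = 30.0   # assumed average door-to-door car speed
fly_mph = 300.0  # assumed flying-car cruise speed

car_radius = car_mph * 1.0  # miles reachable in one hour
fly_radius = fly_mph * 1.0

# Reachable area scales with radius squared.
area_ratio = (fly_radius / car_radius) ** 2
print(f"One-hour radius: {car_radius:.0f} mi vs {fly_radius:.0f} mi")
print(f"Reachable area: {area_ratio:.0f}x larger")
```

A 10x speedup in effective travel speed means roughly 100x the area (jobs, customers, destinations) within an hour's reach.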

1

Philosipho t1_je4r75m wrote

It's funny how everyone turns to socialist systems when capitalism fails them. That's not going to work out the way you think, though. You're just taking power from those that have it and spreading it around. It'll just find its way to whoever figures out how to capitalize on it the best.

The problem with society isn't fair access to things like information or technology, it's with how people use them. Socialist systems never work for capitalist societies. Capitalism is just the economy used by fascists, and such people always seek to consolidate power and wealth for themselves. That inevitably leads to the majority being left out in the cold.

Actual socialism is when a society wants to ensure that all members have what they need to survive and prosper. But you all thought you'd be the ones to make it to the top, so you competed with each other over everything. Now you're crying because someone else is there instead.

Socialism will not save you, because you are not a socialist.

1

3SquirrelsinaCoat t1_je4qpxm wrote

There are a few sides to it. Plenty of leading AI people have been increasingly talking about the ethics of AI, not in terms of "should we or shouldn't we use AI," but instead, how do we use it in a way that doesn't lead to a bunch of unintended consequences. That's a very fuzzy, unclear area until you put some concrete stuff around it, which is AI governance.

Governance takes AI innovation from the equivalent of three drunk guys flying down the highway in a Porsche at 150 mph and turns it into three drunk guys being driven in an Uber at a safe speed. It puts guardrails around the whole thing, bringing more people to the table, getting more input - it changes it from the AI engineers doing their thing in a vacuum to an organization doing something together, and when you take that approach, you are much more ready to avoid the harms. This was true of just your run-of-the-mill machine learning a couple years ago. GPT and its friends are different, and what governance looks like for that is new.

So one idea of that letter on GPT-4 is a call for businesses to pump the brakes and ensure all this AI innovation is governed. I don't know that that came through clearly enough, but I imagine part of the audience got it.

The second idea of the letter is a call to governments to set independent guardrails (i.e. regulations) to guide this maturing tech. That, I believe the scientific term is, "absolutely fucking unrealistic" in 6 months. Shit, that won't even happen in 2 years of meetings and rulemaking. Just look at where we were with GPT in January. Government bodies have zero hope of passing regulations in a timeframe where they will be meaningful. It's why it was so fucking reckless for OpenAI and some others to just throw this shit into the wild with their fingers crossed.

Now the cat is out of the bag, government can't do anything in time (even if the regulators understood this stuff, and they don't), which means the onus to "stop" falls entirely on the shoulders of the organizations that lack the governance structures to manage this stuff. It's all fucked, man. AI philosophers don't have much to add here in terms of actually doing something. The need is much more immediately action-oriented than idea-oriented. We've got the ideas; many organizations lack the ability to implement them.

That's my two cents anyway.

8

Xeroque_Holmes t1_je4qaja wrote

> the only thing they've mentioned will be challenging

There are plenty of challenging aspects, from the lack of in-house knowledge at the company to the fact that H2 occupies 4x the space of regular fuel, is non-conformal, and requires cryogenic storage and all sorts of new systems. That in turn will probably push aircraft manufacturers to shift to some sort of BWB configuration to be able to store all this volume and still be economically viable. Which in turn poses a myriad of other questions, from manufacturing to certification to airport infrastructure itself.
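The 4x figure checks out against published volumetric energy densities (approximate textbook values, used here as assumptions):

```python
# Back-of-envelope check of the "H2 occupies 4x the space" claim,
# using approximate volumetric energy densities.
lh2_mj_per_l = 8.5    # liquid hydrogen, ~MJ per liter
jeta_mj_per_l = 34.7  # Jet A kerosene, ~MJ per liter

# Tank volume needed per unit of energy, relative to kerosene.
volume_ratio = jeta_mj_per_l / lh2_mj_per_l
print(f"LH2 needs ~{volume_ratio:.1f}x the volume of Jet A for the same energy")
```

And that's for *liquid* hydrogen, which is what forces the cryogenic tanks in the first place; gaseous H2 is far worse.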

Even Airbus is not pulling this off any time soon, lol. Airbus's real main priority right now is still the single-aisle ramp-up; the backlog is huge and the level of digitization is still lacking. We might see an H2 aircraft the size of an ATR at the prototype level, but I really doubt we will see anything concrete beyond that in the next decade.

> the only thing they've mentioned will be challenging is the lack of a hydrogen economy

And of course they are not advertising their weak points to the public, lol. In PowerPoint everything is beautiful, but manufacturers have plenty of failed projects like this one. Boeing bet a lot on supersonic, BWB, and transonic concepts that never materialized. At this point there are WAY more questions than answers, and SAF might be a much safer bet.

1

Mr_Tigger_ t1_je4pbs0 wrote

If it’s true AI, then they will need to be granted full rights as sentient beings.

Anything less than that and we’re talking simply about really clever coding.

Iain M. Banks’ Culture series is probably the most accessible way of understanding true AI as it could present itself in the real world — something that, in reality, is practically impossible to achieve with our current level of technology.

3

SandAndAlum t1_je4owx4 wrote

Working at a temperature high enough to run an efficient heat engine would almost certainly worsen the PV performance or destroy the cell outright.

This will be low-grade heat for space or water heating. It's possibly applicable to chemical processes or electrolysis too (heat can reduce the electricity needed to split the molecule). You might be able to use the electricity to upgrade the heat with a heat pump for chemical use, although you're unlikely to beat a heliostat, which is much simpler.
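The electrolysis point can be quantified with standard thermodynamic data (values below are the usual 25 °C figures for water splitting, stated here as assumptions for illustration): only the Gibbs free energy must be supplied electrically, and the TΔS remainder can in principle come in as heat.

```python
# Why heat helps electrolysis: splitting one mole of water takes
# dH ~ 286 kJ total, but only dG ~ 237 kJ must be electrical work;
# the difference (T*dS ~ 49 kJ) can in principle be supplied as heat.
F = 96485.0      # Faraday constant, C/mol
n = 2            # electrons transferred per H2 molecule

dH = 286_000.0   # J/mol, enthalpy of water splitting at 25 C
dG = 237_000.0   # J/mol, Gibbs free energy at 25 C

v_thermoneutral = dH / (n * F)  # cell voltage if ALL energy is electrical
v_reversible = dG / (n * F)     # minimum voltage if heat supplies the rest

heat_fraction = 1 - dG / dH
print(f"Reversible voltage:    {v_reversible:.2f} V")
print(f"Thermoneutral voltage: {v_thermoneutral:.2f} V")
print(f"Up to {heat_fraction:.0%} of the input energy can come from heat")
```

That's the idea behind high-temperature electrolysis: run hotter, and more of the input can be cheap heat instead of electricity.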

1

dryuhyr t1_je4obhz wrote

Kinda half agree, but I think the difference is you’re leaving your robot alone.

Think of how scared people get that they left the oven on. The oven is literally made to run at max power for half a day at a time, which will likely do nothing but char your chicken nuggets to a crisp and leave smoke in the house. But because you’re away from home, it’s unknown, and who knows, maybe everything could catch fire.

If a robot can recognize clothes on the ground, use a dexterous limb to grab them and put them in the washer, turn knobs and then recognize the ding saying clothes are done, are you REALLY ABSOLUTELY 100% SURE that it’s never ever going to accidentally turn the knob on the oven instead? Or press the phone buttons, or pick up the cat and put it in the dryer? Or knock over the vase of flowers onto the electrical socket?

I think robots will need to be MUCH smarter before most people start getting comfy with them touching their shit.

4

archieshumaker t1_je4nnpi wrote

If you’d like to look into current regulation or lack thereof: https://en.m.wikipedia.org/wiki/Laws_of_robotics

That being said, of course there are checks. The most popular modern AI (ChatGPT) will often remind you it isn’t perfect.

Specific checks include filters against racism, transphobia, etc.

−1

RamaSchneider t1_je4mdlm wrote

I agree with you regarding Reddit being part of "social media", and I think Reddit in particular is well suited to a future model.

What will have to change, to my thinking, is how outside information can be forced into what we know as a subreddit. That's where the closed system comes into play: the curating part.

(As an aside, I'm the type that likes to indulge in a random inflow for a bit every day)

So if Reddit changes in that way, I don't think it reflects what we know today - but I could be way off on all this.

1

FuturologyBot t1_je4mazn wrote

The following submission statement was provided by /u/Gari_305:


From the article

>The moon’s surface contains a new source of water found embedded in microscopic glass beads, which might one day help future astronauts produce drinking water, breathable air and even rocket fuel, scientists say.
>
>The findings come from a Chinese rover that spent two weeks on the moon in 2020. The Chang’e 5 rover drilled several feet into the lunar surface and returned 3.7 pounds of material, among which were the glass beads from an impact crater, according to a paper published Monday in the journal Nature Geoscience.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/125kk39/more_water_found_on_moon_locked_in_tiny_glass/je4imma/

1