Recent comments in /f/Futurology

FuturologyBot t1_jdvpsdz wrote

The following submission statement was provided by /u/filosoful:


The math behind making a star-encompassing megastructure

In 1960, visionary physicist Freeman Dyson proposed that an advanced alien civilization would someday quit fooling around with kindergarten-level stuff like wind turbines and nuclear reactors and finally go big, completely enclosing their home star to capture as much solar energy as they possibly could.

They would then go on to use that enormous amount of energy to mine bitcoin, make funny videos on social media, delve into the deepest mysteries of the Universe, and enjoy the bounties of their energy-rich civilization.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/123p87f/would_building_a_dyson_sphere_be_worth_it_we_ran/jdvkkdz/

1

Kaz_55 t1_jdvozww wrote

>We ran the numbers

Did they though?

>In fact, humans have already begun this process; we have successfully lofted approximately 10,000–20,000 metric tons of material into orbit and beyond (and a good fraction of it has even stayed there). We just have 5,971,999,999,999,999,990,000 metric tons to go and we’re golden.

🧐

>An estimated 25 million meteoroids, micrometeoroids and other space debris enter Earth's atmosphere each day,[8] which results in an estimated 15,000 tonnes of that material entering the atmosphere each year.

https://en.wikipedia.org/wiki/Meteoroid

At the same time, Earth loses about 5,000 tons of helium every year. So on balance, the Earth has actually gained mass, not lost it.

Honestly, I don't see all that much evidence in the article of them "running the numbers" regarding Dyson spheres.
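For what it's worth, actually running the numbers is a one-liner. A minimal back-of-envelope sketch in Python, using the figures quoted above (Earth's mass of roughly 5.972 × 10²¹ metric tons, ~20,000 tons lofted to date, 15,000 tons/year of meteoroid gain, 5,000 tons/year of helium loss); the ~65-year spaceflight era is my own assumption:

```python
# Back-of-envelope check on the Dyson-sphere figures quoted in this thread.
# All numbers are order-of-magnitude estimates from the comments above.
EARTH_MASS_TONNES = 5.972e21   # Earth's mass in metric tons
LOFTED_SO_FAR = 2e4            # upper estimate of material launched to orbit, tonnes
METEOROID_GAIN = 15_000        # tonnes of meteoroid material gained per year
HELIUM_LOSS = 5_000            # tonnes of helium lost to space per year
SPACEFLIGHT_YEARS = 65         # assumed length of the spaceflight era so far

# Net mass change of Earth per year (meteoroid gain minus helium loss)
net_gain = METEOROID_GAIN - HELIUM_LOSS

# Years to loft the rest of Earth's mass at the historical average launch rate
rate_per_year = LOFTED_SO_FAR / SPACEFLIGHT_YEARS
years_needed = (EARTH_MASS_TONNES - LOFTED_SO_FAR) / rate_per_year

print(f"net mass change: +{net_gain:,} tonnes/year")
print(f"years to disassemble Earth at the historical launch rate: {years_needed:.2e}")
```

At the historical launch rate the answer comes out around 10¹⁹ years, i.e. vastly longer than the current age of the Universe, which is rather the point.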

17

acutelychronicpanic t1_jdvog5q wrote

Current models like GPT4 specifically and purposefully avoid the appearance of having an opinion.

If you want to see it talk about the rich aroma and how coffee makes people feel, ask it to write a fictional conversation between two individuals.

It understands opinions, it just doesn't have one on coffee.

It'd be like me asking you how you "feel" about the meaning behind the equation 5x + 3y = 17

GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.

3

speedywilfork t1_jdvnee1 wrote

Here is the problem: "intelligence" has nothing to do with regurgitating facts. It has to do with communication and intent. If I ask you "what do you think about coffee," you know I am asking about preference, not the origin of coffee or random facts about coffee. So suppose you asked a human "what do you think about coffee" and they spat out some random facts; then you said "no, that's not what I mean, I want to know if you like it," and they spat out more random facts. Would you think to yourself, "damn, this guy is really smart"? I doubt it. You would more likely think, "what's wrong with this guy?" So if something can't identify intent and return a cogent answer, it isn't "intelligent."

1

Enzo-chan t1_jdvn42t wrote

Because sci-fi writers tended to be too optimistic about how quickly those inventions would become widespread.

It'll probably still happen; robots will one day become widespread. It just won't be during Generation Z's youth.

1

FuturologyBot t1_jdvl6bs wrote

The following submission statement was provided by /u/BousWakebo:


The glycocalyx is developed with high levels of cell-surface mucins, which are thought to help protect the cancer cell from immune cell attack. However, up to now, there has been limited understanding of this barrier, particularly as it relates to cell-based cancer immunotherapies.

These types of treatments involve removing immune cells from a patient, modifying them to seek and destroy cancer, and then putting them back into the patient’s body.

“We found that changes in the thickness of the barrier that were as small as 10 nanometers could affect the antitumor activity of our immune cells or the engineered cells used for immunotherapy,” said Sangwoo Park, a graduate student in Matthew Paszek’s Lab at Cornell University in Ithaca, New York.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/123ofc9/scientists_discover_how_cancer_cells_evade_immune/jdvglq6/

1

speedywilfork t1_jdvkqtr wrote

>Your examples are pretty bad and you should feel bad.

No, they aren't. They illustrated my point perfectly. The AI didn't know what you were asking when you asked "do you live in a computer," because it doesn't understand that we are not asking if it is "alive" in the biological sense; we are asking if it is "alive" in the rhetorical sense. It also doesn't understand the term "computer," because we are not asking about a literal MacBook or PC. We are speaking rhetorically and use the term "computer" to mean something akin to "digital world." It failed to recognize the intended meaning of the words; therefore it failed.

>Approach the drive-through entrance: Look for signs indicating the entrance to the drive-through lane. These signs usually have arrows or the word "drive-through" on them. The entrance is typically located on one side of the restaurant, and you may need to drive around the building to find it.

Another failure. What if I go to a concert in a field and there is an impromptu line to buy tickets? No lane markers, no window, no arrows, just a guy with a chair holding some paper. The AI fails again.

1

filosoful OP t1_jdvkkdz wrote

The math behind making a star-encompassing megastructure

In 1960, visionary physicist Freeman Dyson proposed that an advanced alien civilization would someday quit fooling around with kindergarten-level stuff like wind turbines and nuclear reactors and finally go big, completely enclosing their home star to capture as much solar energy as they possibly could.

They would then go on to use that enormous amount of energy to mine bitcoin, make funny videos on social media, delve into the deepest mysteries of the Universe, and enjoy the bounties of their energy-rich civilization.

13

czk_21 t1_jdvjoeo wrote

Oh really? When AI is able to do everything humans do, and much more efficiently, there is no reason for humans to work anymore. It's the same concept that exists now and has existed all along: those who are better at a job replace those who are worse at it.

For human society not to collapse, there needs to be some form of UBI, so I would say it's basically guaranteed to happen. It's just an extension of the social benefits systems we have now.

1

shr00mydan t1_jdvjen6 wrote

You are getting downvoted, but this is a fine question. Alan Turing himself answered it all the way back in 1950.

>Theological Objection: Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

>I am unable to accept any part of this, but will attempt to reply in theological terms... It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul. An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to “swallow”. But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul. The circumstances in question are discussed in the rest of this paper. In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

https://academic.oup.com/mind/article/LIX/236/433/986238

11

speedywilfork t1_jdvivi6 wrote

I am not impressed by it, because everything it does is expected. But it will never become self-aware, because it has no ability to do so. Self-awareness isn't something you learn; self-awareness is something you are. It is a trait, and traits are assigned, not learned. Even in evolution, the environment is what assigns traits. AIs have no environmental influence outside of their programmers; therefore the programmers would have to assign them the "self-aware" trait.

1

Outrageous_Nothing26 t1_jdvi8qx wrote

Well the truth, it doesn’t really matter, we could be living in a the magical world of harry potter and your anhedonia would do the same. I was just kidding with the skill issue but it sounds like depression, i had something similar happen but it’s just my unsolicited opinion and it doesn’t carry thar much weight

2

KnightOfNothing t1_jdvhuy8 wrote

You're not the first one to bring up "skill issue" when I've expressed my utter disappointment in all things real. Is the human game of socialize, work, and sleep really so much fun for you guys? Is this limited world, lacking anything fantastical, really so impressive to all of you?

I've tried exceptionally hard to understand, but all my efforts have been for naught. The only rational conclusion is that there's something necessary to the human experience that I'm lacking, but it's so fundamental that no one would even think of mentioning it.

1

BousWakebo OP t1_jdvglq6 wrote

The glycocalyx is developed with high levels of cell-surface mucins, which are thought to help protect the cancer cell from immune cell attack. However, up to now, there has been limited understanding of this barrier, particularly as it relates to cell-based cancer immunotherapies.

These types of treatments involve removing immune cells from a patient, modifying them to seek and destroy cancer, and then putting them back into the patient’s body.

“We found that changes in the thickness of the barrier that were as small as 10 nanometers could affect the antitumor activity of our immune cells or the engineered cells used for immunotherapy,” said Sangwoo Park, a graduate student in Matthew Paszek’s Lab at Cornell University in Ithaca, New York.

30