Recent comments in /f/Futurology
mirddes t1_jdw6bna wrote
Reply to comment by mandru in People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
my grandmother never got beaten, my father got smacked once, and so did I.
you're talking about the '70s; women had been voting in many countries for many decades by then.
[deleted] t1_jdw65w9 wrote
Reply to comment by Kahoots113 in Would building a Dyson sphere be worth it: We ran the numbers. by filosoful
[removed]
mandru t1_jdw5m2a wrote
Reply to comment by mirddes in People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
I don't really think you know how bad women had it 50 years ago.
One of my grandma's words of wisdom was: "He was a good husband, he hardly ever beat her."
50 years ago women had no rights to speak of.
speedywilfork t1_jdw5jyl wrote
Reply to comment by RedditFuelsMyDepress in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
But that is the problem: it doesn't know intent, because intent is contextual. If I were standing in a coffee shop, the question means one thing; on a coffee plantation, another; in a business conversation, something totally different. So if you and I were discussing ways to improve our business and I asked "what do you think about coffee", I am not asking about taste. AI can't distinguish these things.
grundar t1_jdw5bsu wrote
Reply to comment by snikZero in There Is Still Plenty We Can Do to Slow Climate Change by nastratin
> The (b) table doesn't seem to show a reduction in temperatures even under the most optimistic case.
The (b) table is looking at the change relative to the late 1800s.
For change relative to other periods, look at p.14, Table SPM.1, Scenario SSP1-1.9, Best Estimate:
- Near term (2021-2040): 1.5C
- Mid-term (2041-2060): 1.6C
- Long term (2081-2100): 1.4C
i.e., an estimated 0.2C temperature decrease between the mid-term and long-term intervals.
Note that SSP1-1.9 reaches net zero CO2 in ~2057 (p.13, Figure SPM.4), i.e., around the end of the mid-term interval. In other words, 20-40 years of increasingly net-negative CO2 emissions are projected to result in 0.2C lower temperatures.
GI_X_JACK t1_jdw4vc3 wrote
Short answer: no
Long answer: it's not remotely feasible. It's a pipe dream at best.
Glittering_Cow945 t1_jdw47qs wrote
There is not enough mass in all of the planets put together to make a shell of any known material strong enough to stay intact, even given transmutation of elements.
MpVpRb t1_jdw46r2 wrote
As usual: good research, misleading headline.
A better headline would be: "Scientists learn a bit more about how cancer cells evade the immune system."
The headline as written implies the problem is solved.
WoolyLawnsChi t1_jdw392l wrote
Reply to comment by BousWakebo in Scientists discover how cancer cells evade immune system by BousWakebo
>the thickness of cancer cells' glycocalyx is one of the major parameters determining immune cell evasion and engineered immune cells work better if the glycocalyx is thinner.
>
>As a result of these findings the researchers then engineered immune cells with special enzymes on their surface to allow them to attach to and interact with the glycocalyx. They then found that these specialized immune cells were able to overcome the glycocalyx armor of cancer cells,
If I am reading this correctly, the discovery isn't some new "drug";
instead, it will make existing immunotherapies more effective.
[deleted] t1_jdw37dx wrote
[removed]
mirddes t1_jdw30dl wrote
Reply to comment by mandru in People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
> we live in the safest era there ever was
unless you're a woman who wants to talk about women's rights
audioen t1_jdw2frs wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
The trivial counterargument is that I can write a Python program that says it is conscious while being nothing of the sort, as it is literally just a program that always prints those words.
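A minimal sketch of that counterexample, made literal:

```python
def am_i_conscious() -> str:
    # No inner life here: the "claim" of consciousness is just a
    # hard-coded string the program always returns.
    return "I am conscious."

print(am_i_conscious())  # prints "I am conscious."
```

The output is indistinguishable from a sincere claim, which is exactly why the claim alone proves nothing.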
It is too much of a stretch to regard a language model as conscious. It is deterministic: it always predicts the same probabilities for the next token (word) when it sees the same input. It has no memory except the words already in its context buffer. It has no ability to do more or less processing as a task demands more or less effort; rather, data flows from input to output token probabilities with exactly the same amount of work each time. (The exception is that as the input grows, processing takes longer, because the context matrix holding the input becomes bigger. Still, the computation flows through the same steps and accumulates into the same matrices; it just gets applied to progressively more words/tokens sitting in the input buffer.)
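The determinism point can be illustrated with a toy stand-in for a model. The hashing here is just an arbitrary pure function of the context, not how a real network computes, but the property is the same: identical input yields an identical distribution; any variety in a chatbot's replies comes from sampling, not from the model itself.

```python
import hashlib

def next_token_probs(context: tuple) -> dict:
    # Toy "language model": the distribution over the next token is a
    # pure function of the context, so the same context always gives
    # back exactly the same probabilities.
    vocab = ["the", "cat", "sat"]
    h = hashlib.sha256(" ".join(context).encode()).digest()
    weights = [h[i] + 1 for i in range(len(vocab))]
    total = sum(weights)
    return {t: w / total for t, w in zip(vocab, weights)}

# Two identical calls, two identical distributions.
assert next_token_probs(("the", "cat")) == next_token_probs(("the", "cat"))
```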
However, we can probably design machine consciousness from the building blocks we have. We can give language models a scratch buffer they can use to store data and plan their replies in stages. We can give them access to external memory so they don't have to memorize the contents of Wikipedia; they can just learn language and use something like Google Search, like the rest of us.
Language models themselves can be simple, but systems built from them can display planning, learning from experience via self-reflection on prior performance, long-term memory, and other properties that at least sound like something approximating consciousness might be involved.
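A rough sketch of what such a system might look like. Here `model` and `search` are hypothetical callables standing in for the language model and a retrieval tool; this is the shape of the idea, not any real API.

```python
def agent_answer(question, model, search):
    # Wrap a plain prompt-to-answer model with a scratch buffer,
    # external memory (retrieval), and a self-reflection pass.
    scratch = []  # scratch buffer the system builds up across stages
    scratch.append(model("Outline steps to answer: " + question))
    scratch.append("Retrieved: " + search(question))  # external memory
    draft = model("Draft an answer using:\n" + "\n".join(scratch))
    # Self-reflection: critique and revise the draft before returning it.
    return model("Critique and improve this answer:\n" + draft)
```

Each stage is still the same deterministic model underneath; the extra behavior lives in the loop around it.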
I'm just going to come out and say it: something like GPT-4 is probably like a 200-IQ human when it comes to understanding language. The way we test it shows that it struggles to perform tasks, but this is mostly an artifact of the architecture, which goes directly from prompt to answer in a single step. Current research is adding the ability to plan, edit, and refine replies, sort of like how a human makes multiple passes over an email, or realizes after writing for a bit that they said something stupid or wrong and goes back to erase the mistake. These are abilities we do not currently grant our language models. Once we do, their performance will most likely go through the roof.
WWGHIAFTC t1_jdw2dim wrote
Reply to A Problem That Keeps Me Up At Night. by circleuranus
But when will it be able to answer "The Last Question"
Surur t1_jdw2bc1 wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
The fault can be on either side.
MpVpRb t1_jdw1wyu wrote
Reply to People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
Low-skilled workers often do poor, minimal-effort work. It's not surprising that managers want to watch them.
mandru t1_jdw1pm0 wrote
Reply to comment by PolychromeMan in People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
Resource availability and education are the main drivers of peace. Most people will be content when their food and shelter are available with ease.
StraightOven4697 t1_jdw13fb wrote
There is absolutely zero point in thinking about this right now. At bare minimum, we'd have to be a multi-planetary species to even consider it.
starcraftre t1_jdw12xv wrote
So, the article takes the pop culture version of a Dyson Sphere (big solid ball kilometers thick), rather than the actual original definition (lots of really low-mass satellites/statites), and concludes it isn't viable.
Meanwhile, the original definition only "loses" one order of magnitude of collected energy (~10% coverage) while requiring 16 orders of magnitude less energy to build and place.
Not to mention that the maintenance costs of a kilometer-thick shell are astronomical compared to a bunch of millimeter-thick mirror sails (which you can just replace for pennies).
JackD4wkins t1_jdw120z wrote
If we can genetically engineer immune cells to fight cancer, why not simply engineer cancer cells to die instead? CINDELA got it right: cut out the middleman.
Tobacco_Bhaji t1_jdw02nh wrote
Oh, Ars Technica ran the numbers. Well, that settles it.
Joe Rogan's opinion would be equally valuable.
RedditFuelsMyDepress t1_jdvzdtf wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Well I tried asking the coffee question from ChatGPT and this is what it said:
"As an artificial intelligence language model, I do not have personal preferences or opinions. However, I can provide some general information about coffee.
Coffee is a popular beverage enjoyed by millions of people around the world. It is made by brewing roasted coffee beans, which contain caffeine, a natural stimulant that can help increase alertness and reduce fatigue. Coffee also contains antioxidants and other beneficial compounds that may offer health benefits when consumed in moderation.
However, it is important to note that excessive consumption of coffee can have negative effects on health, such as increasing the risk of insomnia, anxiety, and digestive issues. Additionally, some people may be more sensitive to the effects of caffeine than others, and may experience negative side effects even with moderate consumption.
Overall, whether or not to consume coffee is a personal choice that depends on an individual's preferences, health status, and caffeine tolerance. It is always a good idea to consult with a healthcare professional to determine if coffee consumption is right for you."
In that first paragraph it does acknowledge the intent of your question, but says that it isn't able to answer it. The facts about coffee being spat out are, I believe, just part of the directives given to ChatGPT.
GrandMasterPuba t1_jdvz5mj wrote
Reply to comment by izumi3682 in You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari on threats to humanity posed by AI by izumi3682
So it's summarizing Wikipedia.
This is a language model. It predicts the next most likely word given a string of preceding tokens, based on a corpus of training data that includes a vast amount of information on string theory.
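That mechanism in miniature: a bigram counter is the simplest possible version of "predict the next word from the preceding tokens." It differs from a GPT in scale and architecture, not in objective.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count which word follows which across the training corpus.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Return the most frequent continuation seen in training.
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
print(predict(train_bigram(corpus), "the"))  # prints "cat"
```

It can only ever recombine what was in its training data, which is the crux of the "can it create new knowledge?" question.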
I will not truly be convinced that GPT can "understand concepts" until it can create new knowledge. I have yet to see any evidence of that.
MadDocsDuck t1_jdvyyva wrote
Reply to comment by RavenWolf1 in Printed organs becoming more useful than bio ones by TheRappingSquid
Yeah but what I'm saying is that we won't get it
PolychromeMan t1_jdvyclb wrote
Reply to comment by mandru in People aged 16-29 in low-skilled jobs are 49% more likely to be surveilled at work. by PuzzBat9019
>Yet as a general population we live in the safest era there ever was.
Probably due largely to more food, education, and overall progress in most places that were previously less developed. I'm not sure it has much to do with security measures or the lack of them. In any case, it's certainly nice that violence has been trending down for many decades in most places.
izumi3682 OP t1_jdw6h52 wrote
Reply to comment by Ichipurka in You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari on threats to humanity posed by AI by izumi3682
>they all just go away eventually anyway.
No, they don't. But that is a discussion for a different venue. Not rslashfuturology.