Recent comments in /f/Futurology

speedywilfork t1_jdw5jyl wrote

But that is the problem: it doesn't know intent, because intent is contextual. If I were standing in a coffee shop, the question means one thing; on a coffee plantation, another; in a business conversation, something totally different. So if you and I were discussing ways to improve our business and I asked "what do you think about coffee," I am not asking about taste. AI can't distinguish these things.

1

grundar t1_jdw5bsu wrote

> The (b) table doesn't seem to show a reduction in temperatures even under the most optimistic case.

The (b) table is looking at the change relative to the late 1800s.

For change relative to other periods, look at p.14, Table SPM.1, Scenario SSP1-1.9, Best Estimate:

  • Near term (2021-2040): 1.5C
  • Mid-term (2041-2060): 1.6C
  • Long term (2081-2100): 1.4C
    i.e., an estimated 0.2C temperature decrease between the mid-term and long-term intervals.

Note that SSP1-1.9 reaches net zero CO2 in ~2057 (p.13, Figure SPM.4), i.e., right at the end of the mid-term interval. In other words, 20-40 years of increasingly net-negative CO2 emissions are projected to result in temperatures only 0.2C lower.

1

WoolyLawnsChi t1_jdw392l wrote

>the thickness of cancer cells' glycocalyx is one of the major parameters determining immune cell evasion and engineered immune cells work better if the glycocalyx is thinner.
>
>As a result of these findings the researchers then engineered immune cells with special enzymes on their surface to allow them to attach to and interact with the glycocalyx. They then found that these specialized immune cells were able to overcome the glycocalyx armor of cancer cells,

If I am reading this correctly, the discovery isn't some new "drug";

instead, it will make existing immunotherapies more effective.

28

audioen t1_jdw2frs wrote

The trivial counterargument is that I can write a Python program that says it is conscious while being no such thing, as it is literally just a program that always prints those words.
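Something like this, in full:

```python
# The entire "conscious" program: it unconditionally prints the same
# sentence every time it runs, with no state, perception, or thought.
print("I am conscious and I have an inner life.")
```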

It is too much of a stretch to regard a language model as conscious. It is deterministic: it always predicts the same probabilities for the next token (word) when it sees the same input. It has no memory except the words already in its context buffer. It has no ability to spend more or less processing as a task demands more or less effort; rather, data flows from input to output token probabilities with exactly the same amount of work each time. (The one exception is that as the input grows, processing takes longer, because the context matrix holding the input becomes bigger. Still, it is computation flowing through the same steps, accumulating into the same matrices, just applied to progressively more words/tokens sitting in the input buffer.)
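To make the determinism point concrete, here is a toy sketch: a fixed linear map plus softmax, nothing like a real transformer in scale, but sharing the relevant property that frozen weights make the output distribution a pure function of the input tokens.

```python
import numpy as np

def next_token_probs(token_ids, W):
    # Toy "language model": pool token embeddings, score the vocabulary,
    # softmax into a probability distribution. The weights W are frozen.
    x = W[token_ids].sum(axis=0)       # crude pooling of token embeddings
    logits = x @ W.T                   # one score per vocabulary entry
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

W = np.random.default_rng(0).normal(size=(100, 16))  # vocab 100, dim 16

p1 = next_token_probs([5, 42, 7], W)
p2 = next_token_probs([5, 42, 7], W)
assert np.array_equal(p1, p2)  # same input -> identical distribution, always
```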

However, we can probably design machine consciousness from the building blocks we have. We can give language models a scratch buffer they can use to store data and plan their replies in stages. We can give them access to external memory so they don't have to memorize the contents of Wikipedia; they can just learn language and use something like Google Search just like the rest of us.
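A hedged sketch of that kind of loop; llm() and web_search() are placeholder stubs standing in for a real model API and a real search backend, and the control flow is the point:

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real language model call")

def web_search(query: str) -> str:
    raise NotImplementedError("stand-in for a real search backend")

def answer_with_scratchpad(question: str, max_steps: int = 5) -> str:
    scratchpad = []  # external memory the model writes to between steps
    for _ in range(max_steps):
        prompt = (f"Question: {question}\n"
                  "Scratchpad so far:\n" + "\n".join(scratchpad) + "\n"
                  "Reply with 'SEARCH: <query>' or 'ANSWER: <text>'.")
        reply = llm(prompt)
        if reply.startswith("SEARCH:"):
            query = reply[len("SEARCH:"):].strip()
            scratchpad.append(f"search({query!r}) -> {web_search(query)}")
        else:
            return reply[len("ANSWER:"):].strip()
    return "no answer within step budget"
```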

Language models themselves can stay simple, but systems built from them can display planning, learning from experience via self-reflection on prior performance, long-term memory, and other properties that at least sound like there might be something approximating consciousness involved.

I'm just going to go out and say this: something like GPT-4 is probably comparable to a 200-IQ human when it comes to understanding language. The way we test it shows that it struggles to perform tasks, but this is mostly because of the architecture of going directly from prompt to answer in a single step. The research right now is adding the ability to plan, edit, and refine the AI's replies, sort of like how a human makes multiple passes over their emails, or realizes after writing for a bit that they said something stupid or wrong and goes back and erases the mistake. These are capabilities we do not currently grant our language models. Once we do, their performance will most likely go through the roof.
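The multi-pass idea, sketched with the same placeholder llm() as above:

```python
def refine(task: str, passes: int = 2) -> str:
    # Draft, critique, revise: several passes instead of prompt -> answer.
    draft = llm(f"Task: {task}\nWrite a first draft.")
    for _ in range(passes):
        critique = llm(f"Task: {task}\nDraft:\n{draft}\n"
                       "List the mistakes and weak points in this draft.")
        draft = llm(f"Task: {task}\nDraft:\n{draft}\n"
                    f"Critique:\n{critique}\nWrite an improved draft.")
    return draft
```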

0

starcraftre t1_jdw12xv wrote

So, the article takes the pop-culture version of a Dyson Sphere (a big solid ball, kilometers thick), rather than the actual original definition (lots of really low-mass satellites/statites), and concludes it isn't viable.

Meanwhile, the original definition only "loses" 1 order of magnitude of energy collected (~10% coverage), while requiring 16 orders of magnitude less energy to build and place.

Not to mention that the maintenance costs of a kilometer-thick shell are astronomical compared to a bunch of millimeter-thick mirror sails (which you can just replace for pennies).
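Rough numbers for the coverage point (a quick sketch: the solar luminosity is a standard figure, the ~10% coverage fraction is the assumption from above):

```python
L_sun = 3.8e26    # total power radiated by the Sun, in watts
coverage = 0.10   # fraction of the sphere a realistic swarm covers

collected = coverage * L_sun
print(f"full shell: {L_sun:.1e} W, 10% swarm: {collected:.1e} W")
# The two differ by a single factor of 10 -- one order of magnitude --
# while the swarm needs vastly less material to build and place.
```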

232

RedditFuelsMyDepress t1_jdvzdtf wrote

Well, I tried asking ChatGPT the coffee question, and this is what it said:

"As an artificial intelligence language model, I do not have personal preferences or opinions. However, I can provide some general information about coffee.

Coffee is a popular beverage enjoyed by millions of people around the world. It is made by brewing roasted coffee beans, which contain caffeine, a natural stimulant that can help increase alertness and reduce fatigue. Coffee also contains antioxidants and other beneficial compounds that may offer health benefits when consumed in moderation.

However, it is important to note that excessive consumption of coffee can have negative effects on health, such as increasing the risk of insomnia, anxiety, and digestive issues. Additionally, some people may be more sensitive to the effects of caffeine than others, and may experience negative side effects even with moderate consumption.

Overall, whether or not to consume coffee is a personal choice that depends on an individual's preferences, health status, and caffeine tolerance. It is always a good idea to consult with a healthcare professional to determine if coffee consumption is right for you."

In that first paragraph it does acknowledge the intent of your question, but says it isn't able to answer it. The facts about coffee it spits out are, I believe, just part of the directives given to ChatGPT.

1

GrandMasterPuba t1_jdvz5mj wrote

So it's summarizing Wikipedia.

This is a language model. It predicts the next best word given a string of preceding tokens, based on a corpus of training data which includes a vast amount of information on string theory.
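Mechanically, that is a decoding loop like this sketch, where model() is a placeholder returning one score per vocabulary token; everything the output "knows" is whatever the training corpus baked into those scores:

```python
def generate(model, tokens, steps=20):
    # Greedy decoding: repeatedly append the highest-scoring next token.
    for _ in range(steps):
        scores = model(tokens)  # one score per vocabulary token
        best = max(range(len(scores)), key=scores.__getitem__)
        tokens = tokens + [best]
    return tokens
```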

I will not truly be convinced that GPT can "understand concepts" until it can create new knowledge. I have yet to see any evidence of that.

1

PolychromeMan t1_jdvyclb wrote

>Yet as a general population we live in the safest era there ever was.

Probably due largely to more food, education, and overall progress in most places that were previously less developed. I'm not sure it has much to do with security measures or the lack of them. In any case, it's certainly nice that violence has been trending down for many decades in most places.

5