Recent comments in /f/Futurology

thatsallweneed t1_jdnn1cq wrote

i found something:

Page visits on Bing have risen 15.8% since Microsoft Corp (MSFT.O) unveiled its artificial-intelligence-powered version on Feb. 7, compared with a near-1% decline for the Alphabet Inc-owned search engine, data through March 20 showed. https://www.reuters.com/technology/openai-tech-gives-microsofts-bing-boost-search-battle-with-google-2023-03-22/

3

Subject_Meat5314 t1_jdnls2i wrote

Agreed. Scale of the hardware (wetware?) is necessary but not sufficient. Next we have to write the software. The last effort took hundreds of millions of years. We have a working model and better management now, though, so hopefully we can make quicker progress.

2

powaqqa t1_jdng0wx wrote

This. I tried it for the first time this week and it was pretty disappointing. Insane mistakes in the answers. But it's promising tech. Still a few years before it gives really trustworthy answers.

1

i0i0i t1_jdnfsy0 wrote

I think we do need a rigorous definition. Otherwise we’re stuck in a loop where the meaning of intelligence is forever updated to mean whatever it is that humans can do that software can’t. The God of the gaps applied to intelligence.

What test can we perform on it that would convince everyone that this thing is truly intelligent? Throw a coding challenge at most people and they'll fail, so that can't be the metric. We could ask it if it's afraid of dying. Well, that's already been done: the larger the model, the more likely it is to report a preference not to be shut down (without the guardrails put on after the fact).

All that to say that I disagree with the idea that it’s “just” doing anything. We don’t know precisely what it’s doing (from the neural network perspective) and we don’t know precisely what the human brain is doing, so we shouldn’t be quick to dismiss the possibility that what often seems to be evidence of true intelligence actually is a form of true intelligence.

1

Bewaretheicespiders t1_jdneo2a wrote

The cost of inference, in GPUs and thus electric power, of these LLMs is just too high. At 8.5 billion searches a day, replacing Google Search with GPT-4 would consume an estimated 7 billion watt-hours. A day. Just for the power consumed by the GPUs.

You would need over 638 Hoover Dams just to power that.

4
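As a sanity check, the per-query arithmetic implied by the comment above can be sketched as follows. Both input figures (8.5 billion searches a day, 7 billion Wh a day) are the comment's own estimates, not measured values:

```python
# Back-of-envelope check of the figures quoted above.
# Both inputs are the comment's assumptions, not measured data.
queries_per_day = 8.5e9   # assumed Google-scale search volume
gpu_wh_per_day = 7e9      # estimated GPU energy for GPT-4 inference, in Wh

# Energy per individual query
wh_per_query = gpu_wh_per_day / queries_per_day

# Equivalent continuous power draw, in megawatts
avg_power_mw = gpu_wh_per_day / 24 / 1e6

print(f"{wh_per_query:.2f} Wh per query")     # ≈ 0.82 Wh
print(f"{avg_power_mw:.0f} MW average draw")  # ≈ 292 MW
```

So under these assumptions each query costs a little under one watt-hour, and the fleet draws roughly 292 MW around the clock.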

Thin-Limit7697 t1_jdndg44 wrote

Alternatively, did you actually exist half a second ago, or was what existed something else, whose memories were merged with your current sensory input to become "you"?

It's the same old debate over whether it is possible to bathe in the same river twice. What is the point of expecting a completely non-destructive conversion from neurons to processors when the conversion from the neurons of one second ago to your current neurons is itself destructive?

1