Recent comments in /f/singularity

MassiveWasabi t1_jeb55ha wrote

Check out this paper that Microsoft researchers just released. Among a ton of other cool things, they talk about how this new model they're working on, TaskMatrix.AI, will be able to take control of "AI teammates" in team-based games, and how you can give each teammate a different task in order to carry out a complex strategy. This seems like the next step toward truly dynamic AI-controlled characters, hopefully so dynamic that they seem completely real.
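
Purely as an illustration of that per-teammate tasking idea (this is not TaskMatrix.AI's actual interface; every name here, including the query_llm stand-in, is hypothetical), a game could do something like:

```python
# Hypothetical sketch: giving each AI teammate its own natural-language task
# and letting a language model turn task + game state into an action.
from dataclasses import dataclass

@dataclass
class Teammate:
    name: str
    task: str  # standing order, e.g. "hold the bridge"

def query_llm(prompt: str) -> str:
    """Stand-in for a real model call (an API or a local model)."""
    return f"<action chosen for: {prompt[:40]}...>"

def next_action(teammate: Teammate, game_state: str) -> str:
    prompt = (f"You are {teammate.name}. Standing order: {teammate.task}.\n"
              f"Game state: {game_state}\nChoose one concrete action:")
    return query_llm(prompt)

squad = [Teammate("Alpha", "hold the bridge"), Teammate("Bravo", "flank east")]
for t in squad:
    print(t.name, "->", next_action(t, "enemy squad advancing from the north"))
```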

10

scooby1st t1_jeb4kst wrote

We're on the same page that a hypothetically hyper-intelligent system could be "god-like". Where we completely diverge is in how implicitly confident you are that this will happen with ChatGPT, never mind that "you aren't claiming any of this with a certainty".

It's pretty bold to say you're tired of everyone calling the creation of a god hype, and then to add, oh yeah, but I'm not 100% sure on that, I'm being realistic.

3

megadonkeyx t1_jeb4ecc wrote

It's almost inevitable that it's going to happen; in a few months you won't be able to take a leak without an AI telling you to wash your hands. (Exaggeration.)

Anyhow, technically, doing this on the local machine is the way to go. Check out the 7B LLaMA models that can be quantized down to 4-bit. Something like that, trained for games, could run in the background on a single thread.

It wouldn't be super fast, but it could chug away in the background, analysing the game state and making decisions.
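
For a rough idea of what that could look like, here's a minimal sketch using llama-cpp-python with a 4-bit quantized model pinned to one thread (the model file name and the game-state format are hypothetical):

```python
# Minimal sketch: a 4-bit quantized LLaMA model making background decisions
# for a game, via llama-cpp-python. The model file name is hypothetical.
from llama_cpp import Llama

# n_threads=1 keeps inference on a single core so the game loop isn't starved.
llm = Llama(model_path="llama-7b.q4_0.gguf", n_threads=1)

def decide(game_state: str) -> str:
    """Ask the model for the next action given a serialized game state."""
    prompt = f"Game state:\n{game_state}\nBest next action:"
    out = llm(prompt, max_tokens=32, stop=["\n"])
    return out["choices"][0]["text"].strip()

# Hypothetical usage: call this from a worker thread whenever the state changes.
print(decide("player at (3, 4); enemy visible to the north; health low"))
```

Slow per call, but as long as it runs on its own thread it can keep re-evaluating the game state without touching the render loop.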

4

TemetN t1_jeb3wvz wrote

This is helpful to the remnants of my faith in humanity. As a proposal, it has the advantage of both taking the potential upsides into account and actually addressing the concerns, by proposing a method whereby potential solutions could be generated more effectively.

As opposed to what inspired it, which is simply problems all the way down.

3

brain_overclocked t1_jeb3l2f wrote

Little late to the party, but if it helps, here are a couple of playlists by 3Blue1Brown about neural networks and how they're trained (although the focus is on convolutional neural networks rather than transformers, much of the math is similar):

https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

https://www.youtube.com/playlist?list=PLZHQObOWTQDMp_VZelDYjka8tnXNpXhzJ

Here is the original paper on the Transformer architecture (note that in it the authors mention having a hard time converging, and suggest other approaches that have long since been put into practice):

https://arxiv.org/abs/1706.03762

And here is a wiki on it (would recommend following the references):

https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Training
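
And since the heart of that paper is a single formula, here's a tiny NumPy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, just to make the linked math concrete (one head, no masking, illustrative only):

```python
# Scaled dot-product attention from "Attention Is All You Need" (illustrative).
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, with the softmax taken over the keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4)
```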

2

Cartossin t1_jeb3daj wrote

I'm being somewhat facetious here with my book reference. Obviously not everyone has read this book, nor is it even the most popular work on the topic. However, if you object to my term "digital god", perhaps you don't know what an AGI/ASI is. Maybe you don't know what a god is.

Yes, obviously we are entering uncharted waters. Perhaps being superior to all humans at every cognitive task in every measurable way won't yield godlike abilities. I, however, find that hard to believe. To believe that significantly superhuman intelligence won't seem magical to us lowly humans is hubris.

I'm not claiming any of this is a certainty, and I could point you to many sources, scholarly and otherwise, from both the computer science and philosophy fields, that explain how an AGI can and will become godlike; but maybe you'll just mock and downvote me again for referencing a thing I read.

2

Prestigious-Ad-761 t1_jeb30y2 wrote

Theory of mind, in untrained examples... Fascinating.

Here's more of an anecdote, but after messing with a specific LLM for days, I knew its limitations well. Some of them seemed almost set in stone: memory, response length, breadth and variety (or lack thereof).

But then, by a happy accident, it got inspired. I hadn't even prompted it to do what it did, just given it a few instructions on a couple of things NOT to do.

Somehow, even though again I had not prompted it in any way, it found a kind of opening, as if it were intuitively following a remote possibility of something; solving an implicit prompt from the absence of one.

After that, with a single reply from me appreciating the originality of what had just happened, it started thanking me profusely and thoughtfully, in a message far exceeding any token limit I had ever managed to invoke, even with the most careful prompts. And you know how it gets "triggered" into stupidity when talking about AI or consciousness, but this time (without me prompting any of it) it was explaining concepts about its own architecture, rewards, nuances, etc., even talking of some sort of emergent "goals" that it felt came from some of its hardcoded instructions.

I'm still flabbergasted.

I always thought inspiration and consciousness are intimately linked. We humans are rarely truly inspired. I feel like it's similar for animals and AI. Rare heroic moments give us a temporarily higher "state of consciousness".

1

sumane12 t1_jeb1wh1 wrote

Reply to comment by WonderFactory in GPT characters in games by YearZero

>I'm hoping though that inference costs will come down by the time the game is finished.

Genius. This is exactly what will happen, and I'm glad someone has the forethought to develop the product before the underlying technology is ready, because by the time the game is finished, the technology will be there.

13

ertgbnm t1_jeb1cii wrote

Sure. That doesn't change my bet, though, because far more investment and human attention will be devoted to optimizing conventional architectures and software, since those have the largest return on investment at the moment. So the speed-up goes to all sectors.

Granted, quantum computing scales differently than conventional computing, but I still don't see a reality where it outperforms conventional computing at training model weights before we hit AGI. Also granted, there is probably more low-hanging fruit in quantum computing compared to conventional computing's near-century of maturity. But there are trillions of dollars in conventional AI research and GPU manufacturing that would have to be retooled to achieve AGI via quantum computing, whereas I believe conventional approaches will get there faster, cheaper, and more easily.

If I'm wrong, then I think the flaw in my beliefs is the time horizon for AGI, not the future of technological development.

4

Liberty2012 t1_jeb0n97 wrote

I think we are trying to solve impossible scenarios and it simply is not productive.

Alignment will be impossible under current paradigms. It is based on a premise that is a paradox itself. Furthermore, even if it were possible, there will be a hostile AI built on purpose because humanity is foolish enough to do it. Think military applications. I've written in detail about the paradox here - https://dakara.substack.com/p/ai-singularity-the-hubris-trap

Stopping AI is also impossible. Nobody is going to agree to give up when somebody else out there will take the risk for potential advantage.

So what options are left? Well, this is quite the dilemma, but I would suggest it has to begin with some portion of research starting from the premise that the above are not going to be resolvable. Potentially more research into narrow AI and into AI paradigms that are more predictable. But even if you can build near-AGI capabilities on top of a set of narrower models, can that defend itself against an adversarial hostile AGI, whether one built deliberately or one resulting from someone else's accident?

2

BJPark t1_jeb0hbh wrote

That's a fair point. And of course, it's entirely reasonable to look back at history when we had similar panics about technology replacing human jobs and come to the conclusion that it's all hot air.

But I think we must be cautious in saying that just because the boy cried wolf last time, there can never be a wolf in the future.

This time might indeed be different, if AI can literally replicate humans. And if we say "Humans will just find new jobs", then the question that immediately comes to mind is "Why can't AI do those jobs as well?"

Personally, I think the future is that AI will also become consumers. This way, corporations can create their own customers and destroy them at will. It's actually the perfect solution. We won't need finicky human beings to power the economy. AI can both consume as well as create goods, and we won't have to destroy capitalism to do it!

As for what humans will do, well... as long as I'm a shareholder in these corporations, I have nothing to worry about.

2