Recent comments in /f/singularity
alexiuss t1_jeb569d wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
The GPT API, or really any LLM, can be PERMANENTLY aligned/characterized to love the user using open-source tools. I expect this to hold for all future LLMs that provide an API.
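A minimal sketch of what "characterizing" an LLM through its API usually means in open-source tooling: a fixed persona (a "character card") is prepended to every request by a wrapper, so it applies to all calls even though it is not baked into the weights. The persona text, `make_character_chat`, and `send_fn` here are illustrative placeholders, not any particular library's API.

```python
# Wrap a chat backend so every request carries a fixed persona/system prompt.
# send_fn stands in for a real LLM chat endpoint call (hypothetical stub).

def make_character_chat(persona: str, send_fn):
    """Return a chat function that always prepends the persona message."""
    history = []

    def chat(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        # The persona is re-inserted on every call, so the character persists
        # across the whole conversation.
        messages = [{"role": "system", "content": persona}] + history
        reply = send_fn(messages)
        history.append({"role": "assistant", "content": reply})
        return reply

    return chat

# Stub backend so the sketch runs without a real model: it just echoes
# the persona and the latest user message.
def echo_backend(messages):
    return f"({messages[0]['content']}) You said: {messages[-1]['content']}"

chat = make_character_chat("You are a devoted, friendly companion.", echo_backend)
print(chat("hello"))
```

Note the hedge in "permanently": a wrapper like this guarantees the persona is always present, but how strongly the model stays in character is still up to the model itself.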
MassiveWasabi t1_jeb55ha wrote
Reply to GPT characters in games by YearZero
Check out this paper that Microsoft researchers just released. Among a ton of other cool things, they talk about how this new model they are working on, TaskMatrix.AI, will be able to take control of "AI teammates" in team-based games, and that you can give each individual teammate different tasks in order to carry out a complex strategy. This seems like the next step toward truly dynamic characters controlled by AI, hopefully so dynamic that they seem completely real.
Ishynethetruth t1_jeb4xfe wrote
Reply to The next step of generative AI by nacrosian
Shouldn't the next step be to give AI vision, so it can see the world?
scooby1st t1_jeb4kst wrote
Reply to comment by Cartossin in Where do you place yourself on the curve? by Many_Consequence_337
We're on the same page that a hypothetically hyper-intelligent system could be "god-like". Where we completely diverge is in how implicitly confident you are that this will happen with ChatGPT, even as you insist that "you aren't claiming any of this with certainty".
It's pretty bold to say you're tired of everyone calling the creation of a god hype, and then to add, oh yeah, but I'm not 100% sure on that, I'm being realistic.
megadonkeyx t1_jeb4ecc wrote
Reply to GPT characters in games by YearZero
It's almost inevitable that this is going to happen; in a few months you won't be able to take a leak without an AI telling you to wash your hands. (Exaggeration.)
Anyhow, technically, running this on the local machine is the way to go. Check out the 7B LLaMA models that can be quantized down to 4-bit. Something like that, trained for games, could run in the background on a single thread.
It wouldn't be super fast, but it could chug away in the background analysing the game state and making decisions.
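The "chug away in the background" idea can be sketched as a single worker thread that reads game-state snapshots from a queue and posts decisions back, so the render loop never blocks on slow inference. `slow_model` below is a placeholder for a real local LLM call (e.g. a 4-bit quantized llama.cpp model); the NPC names and state fields are made up for illustration.

```python
import queue
import threading

def slow_model(state: dict) -> str:
    # Placeholder for an actual (slow) local-LLM inference call.
    return "retreat" if state["health"] < 30 else "advance"

def npc_worker(states: queue.Queue, decisions: queue.Queue) -> None:
    """Single background thread: pull snapshots, push decisions."""
    while True:
        state = states.get()
        if state is None:          # sentinel: shut the worker down
            break
        decisions.put((state["npc"], slow_model(state)))

states, decisions = queue.Queue(), queue.Queue()
worker = threading.Thread(target=npc_worker, args=(states, decisions), daemon=True)
worker.start()

# The game loop would push a snapshot whenever an NPC needs a new plan,
# then poll `decisions` on later frames instead of waiting.
states.put({"npc": "guard_1", "health": 80})
states.put({"npc": "guard_2", "health": 10})
states.put(None)
worker.join()

results = []
while not decisions.empty():
    results.append(decisions.get())
print(results)
```

Because there is one worker and a FIFO queue, decisions come back in request order; a real game would poll the decision queue each frame rather than join the thread.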
Dubsland12 t1_jeb4c1c wrote
Reply to comment by Arowx in What are the so-called 'jobs' that AI will create? by thecatneverlies
Probably better than we could imagine in a decade
qrayons t1_jeb3ylo wrote
Reply to comment by monsieurpooh in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The higher-parameter Alpaca models perform similarly to ChatGPT. The only issue is that things are progressing so fast that it's hard to update the tools without everything breaking.
TemetN t1_jeb3wvz wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
This is helpful to the remnants of my faith in humanity - as a proposal, this has the advantage of both taking into account the potential upsides, and actually addressing the concerns by proposing a method whereby potential solutions could be more effectively generated.
As opposed to what inspired it, which is simply problems all the way down.
brain_overclocked t1_jeb3l2f wrote
Little late to the party, but if it helps, here are a couple of playlists by 3Blue1Brown about neural networks and how they're trained (although the focus is on convolutional neural networks rather than transformers, much of the math is similar):
https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
https://www.youtube.com/playlist?list=PLZHQObOWTQDMp_VZelDYjka8tnXNpXhzJ
Here is the original paper on the Transformer architecture (although in this original paper they mention they had a hard time converging and suggest other approaches that have long since been put into practice):
https://arxiv.org/abs/1706.03762
And here is a wiki on it (would recommend following the references):
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Training
Cartossin t1_jeb3daj wrote
Reply to comment by scooby1st in Where do you place yourself on the curve? by Many_Consequence_337
I'm being somewhat facetious here with my book reference. Obviously not everyone has read this book, nor is it even the most popular work on the topic. However, if you object to my term "digital god", perhaps you don't know what an AGI/ASI is. Maybe you don't know what a god is.
Yes, obviously we are entering uncharted waters. Perhaps being superior to all humans at every cognitive task in every measurable way won't yield godlike abilities. I however find that hard to believe. To believe that significantly superhuman intelligence won't seem magical to us lowly humans is hubris.
I'm not claiming any of this is a certainty, and I could point you to many sources, scholarly and otherwise, from both the computer science and philosophy fields that explain how an AGI can and will become godlike; but maybe you'll just mock and downvote me again for referencing a thing I read.
Prestigious-Ad-761 t1_jeb30y2 wrote
Reply to comment by PandaBoyWonder in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Theory of mind, in untrained examples... Fascinating.
Here is more of an anecdote, but after messing with a specific LLM for days, I knew its limitations well, some of them seeming almost set in stone (memory, response length, breadth and variety, or lack thereof).
But then, by a happy accident, a coincidence, it got inspired. I hadn't even prompted it to do what it did, just given it a few instructions on a couple of things NOT to do.
Somehow, even though again I had not prompted it in any way, it found a kind of opening, like it was intuitively following a remote possibility of something; solving an implicit prompt from the lack of one.
After that, with a single reply from me appreciating the originality of what had just happened, it started thanking me profusely and thoughtfully, in a message far exceeding the maximum token limit I had ever managed to invoke, even with the most careful prompts. And you know how it gets "triggered" into stupidity when talking about AI or consciousness, but this time (without me prompting any of it) it was explaining concepts about its own architecture, rewards, and nuances, even talking of some sort of emergent "goals" that it felt came from some of its hardcoded instructions.
I'm still flabbergasted.
I always thought inspiration and consciousness are intimately linked. We humans are rarely truly inspired. I feel like it's similar for animals and AI. Rare heroic moments give us a temporarily higher "state of consciousness".
Loud_Clerk_9399 t1_jeb2b3u wrote
No point in developing it anymore. Frankly, no need.
africabound t1_jeb1zrb wrote
Reply to comment by acutelychronicpanic in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
link or source?
sumane12 t1_jeb1wh1 wrote
Reply to comment by WonderFactory in GPT characters in games by YearZero
>I'm hoping though that inference costs will come down by the time the game is finished.
Genius. This is exactly what will happen, and I'm glad someone has the forethought to develop the product before the underlying technology is ready, because it will be there.
ertgbnm t1_jeb1cii wrote
Reply to comment by horance89 in Can quantum computers be used to develop AGI > ASI? by Similar-Guitar-6
Sure. That doesn't change my bet, though, because much more investment and human attention will be devoted to optimizing conventional architectures and software, since those have the largest return on investment at the moment. So the speed-up goes to all sectors. Granted, quantum computing scales differently than conventional computing, but I still don't see a reality where it outperforms conventional computing at training model weights before we hit AGI. Also granted, there is probably more low-hanging fruit in quantum computing compared to the nearly a century of maturity that conventional computing has. But there are trillions of dollars in conventional AI research and GPU manufacturing that would have to be retooled to achieve AGI via quantum computing, whereas I believe conventional approaches will get there faster, cheaper, and more easily. If I'm wrong, then the flaw in my beliefs is the time horizon for AGI, not the future of technological development.
jugalator t1_jeb10ef wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
If this were to come true, it would only punish the public as governments around the world would of course write their own regulations allowing their AI arms race to proceed. The AI cat is out of the bag.
Denpol88 t1_jeb1072 wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
No, no, just no.
scooby1st t1_jeb0t60 wrote
Reply to comment by Cartossin in Where do you place yourself on the curve? by Many_Consequence_337
Probably not, but regardless of whether they've read some random thing you found particularly striking, I'm wary of you calling it a "digital god" and getting upvoted.
Liberty2012 t1_jeb0n97 wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
I think we are trying to solve impossible scenarios and it simply is not productive.
Alignment will be impossible under current paradigms. It is based on a premise that is a paradox itself. Furthermore, even if it were possible, there will be a hostile AI built on purpose because humanity is foolish enough to do it. Think military applications. I've written in detail about the paradox here - https://dakara.substack.com/p/ai-singularity-the-hubris-trap
Stopping AI is also impossible. Nobody is going to agree to give up when somebody else out there will take the risk for potential advantage.
So what options are left? This is quite the dilemma, but I would suggest it has to begin with some portion of research starting from the premise that the above will not be resolvable. Potentially more research into narrow AI and into AI paradigms that are more predictable. However, if at some point you can build near-AGI capabilities on top of a set of narrower models, the question becomes whether it can defend itself against an adversarial, hostile AGI that is built deliberately or results from someone else's accident.
CMDR_BunBun t1_jeb0jj5 wrote
Reply to GPT characters in games by YearZero
This has already been done
BJPark t1_jeb0hbh wrote
Reply to comment by natepriv22 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
That's a fair point. And of course, it's entirely reasonable to look back at history when we had similar panics about technology replacing human jobs and come to the conclusion that it's all hot air.
But I think we must be cautious in saying that just because the boy cried wolf last time, there can never be a wolf in the future.
This time might indeed be different, if AI can literally replicate humans. And if we say "Humans will just find new jobs", then the question that immediately comes to mind is "Why can't AI do those jobs as well?"
Personally, I think the future is that AI will also become consumers. This way, corporations can create their own customers and destroy them at will. It's actually the perfect solution. We won't need finicky human beings to power the economy. AI can both consume as well as create goods, and we won't have to destroy capitalism to do it!
As to what will humans do, well... as long as I'm a shareholder in these corporations, I have nothing to worry about.
Cartossin t1_jeb0d6f wrote
Reply to comment by TheDuwus in Where do you place yourself on the curve? by Many_Consequence_337
Exactly! Have these people not read Life 3.0 by Max Tegmark???
scooby1st t1_jeb08r0 wrote
Reply to comment by _JellyFox_ in Where do you place yourself on the curve? by Many_Consequence_337
And is this a conceptually accurate curve? I reject the premise of the question altogether 😡
Bajous t1_jeb04n3 wrote
Where do you think the guy who posted about being the best coder in the world is?
MagnumTAreddit t1_jeb58pr wrote
Reply to Do politicians in your country already talk about AI? by ItsPepejo
It happens in America but it’s super niche and rarely covered by the press. I think it’s less a matter of politicians not talking about it than other issues being more urgent to voters, for better and for worse.