Recent comments in /f/MachineLearning
schwah t1_jbwpmcd wrote
Reply to comment by Nukemouse in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
There are ~10^170 valid board states in Go, but only roughly 10^80 atoms in the observable universe. So even with a universe-sized computer, you still wouldn't come close to having the compute power to enumerate them.
AlphaGo uses neural nets to estimate the utility of board states and a depth-limited search to find the best move.
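To make the idea concrete, here is a toy sketch of "value estimate plus depth-limited search" on a trivial take-1-or-2 Nim game. None of this is AlphaGo's actual code; the hand-written `value_estimate` stands in for the value network, which in AlphaGo is a deep net trained on millions of positions.

```python
# Toy illustration: search a few plies deep, then ask a learned evaluator
# to score the leaves instead of playing the game out to the end.

def value_estimate(stones):
    """Stand-in for the value network: score a position for the player to move."""
    # In this Nim variant, positions with stones % 3 == 0 lose for the
    # player to move; a learned net would only approximate this rule.
    return -1.0 if stones % 3 == 0 else 1.0

def search(stones, depth):
    """Depth-limited negamax: exact search near the root, estimates at the leaves."""
    if stones == 0:
        return -1.0                       # previous player took the last stone and won
    if depth == 0:
        return value_estimate(stones)     # cutoff: trust the evaluator
    return max(-search(stones - take, depth - 1)
               for take in (1, 2) if take <= stones)

def best_move(stones, depth=4):
    moves = [take for take in (1, 2) if take <= stones]
    return max(moves, key=lambda t: -search(stones - t, depth - 1))

print(best_move(7))  # prints 1: taking 1 leaves 6, a losing position
```

The same structure scales to Go: the search stays shallow enough to be tractable, and the quality of the leaf evaluator does the heavy lifting.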
SeucheAchat9115 t1_jbwoc1c wrote
Reply to comment by marboka in [D] Unsupervised Learning — have there been any big advances recently? by onebigcat
SSL is the usual abbreviation for Semi-Supervised Learning. What you're referring to here is Self-Supervised Learning, which is related to unsupervised learning.
ghostfuckbuddy t1_jbwobij wrote
Reply to comment by shahaff32 in [D] Is Pytorch Lightning + Wandb a good combination for research? by gokulPRO
That just sounds like a bug. It might take a lot less effort to report it for patching than to rewrite all your own code.
duboispourlhiver t1_jbwmn0u wrote
Reply to comment by Curious_Tiger_9527 in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
The computer doesn't compute all the moves and doesn't know the exact, mathematically best move. It uses digital neurons to infer rules from a huge number of games and to find very good moves. I'd call that intelligence (artificial intelligence).
KingsmanVince t1_jbwmi1h wrote
Reply to comment by GraydientAI in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Unplug the machine!
duboispourlhiver t1_jbwmh2g wrote
Reply to comment by currentscurrents in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
We are often using neural networks whose training is finished. The weights being fixed is what lets this attack work. That may be obvious, but I'd like to underline that biological neural networks are never fixed.
science-raven OP t1_jbwl03o wrote
Reply to comment by MrTacobeans in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
Fixed/variable cost analysis is crucial. $15k is very high. If you put 10 skilled workers on it for a year, plus development labs, it would cost about $1.2 million, including outsourcing to specialist engineers to refine the CAD files.
At high volumes, say 4000 units, that amortizes to $300 of R&D per unit. Obviously it would benefit from a $2-3 million dev budget, though.
The bill of materials is about $3000, the metal welding is $500, and the assembly is another $500, so an open-source kit would come in under $4000, and a fully built unit at around $5000.
Companies like Husqvarna and iRobot (Roomba) sell at market price, not production cost, so they can take a high markup, and they use custom circuit boards and custom plastic moulds, including big thermoplastic pieces.
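A quick sanity check of the arithmetic above (all figures are the estimates in the comment, not real quotes; the gap between the computed build cost and the quoted ~$5000 would be margin and miscellaneous labor):

```python
# Back-of-the-envelope amortization of the R&D budget over a production run.

rnd_budget = 1_200_000   # ~10 engineers for a year + labs + outsourced CAD
units = 4000             # assumed production volume
rnd_per_unit = rnd_budget / units

bom = 3000               # bill of materials
welding = 500
assembly = 500
kit_cost = bom + welding + assembly      # open-source kit, no R&D recovery
built_cost = kit_cost + rnd_per_unit     # fully built, R&D amortized in

print(rnd_per_unit)   # 300.0
print(kit_cost)       # 4000
print(built_cost)     # 4300.0
```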
Curious_Tiger_9527 t1_jbwizur wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Is it really intelligence, or just the computer knowing all the best possible moves?
currentscurrents t1_jbwgjte wrote
Reply to comment by NotARedditUser3 in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Nobody actually has a good solution to adversarial attacks yet.
The problem is not just this specific strategy. It's that, if you can feed arbitrary inputs to a neural network and observe its outputs, you can run an optimization process against it to find minimally-disruptive inputs that make it fail. You can fool an image classifier by imperceptibly changing the image in just the right ways.
It's possible this is just a fundamental vulnerability of neural networks. Maybe the brain is vulnerable to this too, but it's locked inside your skull, so it's hard to run an optimizer against it. Nobody knows; more research is needed.
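The "optimize against the model" idea can be sketched in a few lines. This is a toy linear classifier, not a deep net, and the step size is chosen just large enough to flip the prediction for the demo; real attacks like FGSM/PGD apply the same input-gradient trick to image classifiers.

```python
import numpy as np

# Gradient-based adversarial perturbation on a toy linear "classifier".
# The attacker follows the gradient of the model's score with respect to
# the INPUT: many tiny per-coordinate nudges add up to a flipped label.

rng = np.random.default_rng(0)
d = 2500
w = rng.normal(size=d)                  # frozen "trained" weights
x = w / np.linalg.norm(w)               # an input scored confidently positive

def predict(v):
    return 1 if w @ v > 0 else -1

# For a linear score w.v the input-gradient is w itself. Step each input
# coordinate slightly against sign(w), just enough to cross the boundary.
eps = 1.1 * (w @ x) / np.abs(w).sum()   # ~0.03 per coordinate here
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))       # the small perturbation flips the label
```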
onkus t1_jbwftny wrote
Reply to comment by Simusid in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Doesn’t this also make it essentially impossible to compare the two figures you’ve shown?
currentscurrents t1_jbwfbdd wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
TL;DR they trained an adversarial attack against AlphaGo. They used an optimizer to find scenarios where the network performed poorly. Then a human was able to replicate these scenarios in a real game against the AI.
The headline is kinda BS imo; it's a stretch to say it was beaten by a human, since the human was just following the instructions from the optimizer. But adversarial attacks are a serious threat to deploying neural networks for anything important; we really do need to find a way to beat them.
NotARedditUser3 t1_jbwf0ja wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
The difference is, they'll easily be able to train the model forward a bit to deal with this, or add a few lines of code for it. An easily fixed issue.
The human beat it this time... after 7 years.
But after this, it's not as if the humans improve. That vulnerability gets stamped out, and that's it.
Additional_Counter19 t1_jbweami wrote
Reply to comment by Nukemouse in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Not for Go, there are too many states. They used machine learning and self-play to prune the number of states evaluated.
GraydientAI t1_jbweagi wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
The difference is, the human gets tired and the AI can play 10,000,000 games simultaneously nonstop
Humans have zero chance haha
Nukemouse t1_jbwd60k wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
I thought there was a computer that could just compute all possible go board states? Was that not the case?
huehue9812 t1_jbw7qjq wrote
Reply to comment by onebigcat in [D] Unsupervised Learning — have there been any big advances recently? by onebigcat
SSL doesn't require human labels, so it falls under unsupervised learning.
phys_user t1_jbw7i59 wrote
Reply to comment by rshah4 in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Looks like text-embedding-ada-002 is already on the MTEB leaderboard! It comes in at #4 overall, and has the highest performance for clustering.
You might also want to look into SentEval, which can help you test the embedding performance on a variety of tasks: https://github.com/facebookresearch/SentEval
LoaderD t1_jbw3640 wrote
Reply to [P] vanilla-llama an hackable plain-pytorch implementation of LLaMA that can be run on any system (if you have enough resources) by poppear
> In reality you can easily fit the 65B version in 2 A100 with 100G of VRAM.
Ughhh are you telling me I have to SSH into my DGX 100 instead of just using my local machine with 1 A100? (Satire I am a broke student)
Appreciate the implementation and transparency. I don't think many people realize how big a 65B parameter model is since there's no associated cost with downloading them.
lppier2 t1_jbvwweu wrote
Reply to comment by Simusid in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Thanks. To be really convinced, though, I'd want to see real-world examples: a sample of where OpenAI did well, and where sentence transformers did well. Frankly, if it doesn't outperform sentence transformers I'd be a bit disappointed, given the larger size and all.
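A head-to-head like that could be run with a small harness along these lines. The `embed_stub` function here is a placeholder (deterministic fake vectors keyed on the text); in practice you'd swap in calls to the OpenAI embeddings API and a SentenceTransformer model and compare the cosine similarities each assigns to the same sentence pairs.

```python
import zlib
import numpy as np

# Hypothetical comparison harness: same sentence pairs, two embedders,
# compare cosine similarities. embed_stub is a stand-in for a real model.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_stub(text, dim=16):
    # Deterministic fake embedding keyed on the text; swap in a real model.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    return rng.normal(size=dim)

pairs = [
    ("a cat sat on the mat", "a feline rested on a rug"),    # should be similar
    ("a cat sat on the mat", "stock prices fell sharply"),   # should not be
]
for s1, s2 in pairs:
    print(f"{cosine(embed_stub(s1), embed_stub(s2)):+.3f}  {s1!r} vs {s2!r}")
```

With real models, the interesting output is the pairs where the two embedders disagree.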
megacewl t1_jbvuksj wrote
Reply to comment by PuzzledWhereas991 in [P] vanilla-llama an hackable plain-pytorch implementation of LLaMA that can be run on any system (if you have enough resources) by poppear
Not sure about vanilla-llama, but at the moment you can run LLaMA-13B at 4-bit with >10GB of VRAM, so your 3080 Ti can run it.
To run 30B at 4-bit, you need at least 20GB of VRAM. If your motherboard supports SLI, you can use NVLink to pool the VRAM of your two GPUs into a combined 20GB, which would let you run the 30B model, provided you have enough system RAM.
Not sure if I can post the link to the tutorial here, but Google "rentry Llama v2" and click the "LLaMA Int8 4bit ChatBot Guide v2" result for the most up-to-date tutorial on running it.
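The VRAM figures above follow from simple arithmetic: at 4-bit quantization each parameter takes half a byte, plus headroom for activations and cache. The ~30% overhead factor below is a loose assumption for illustration, not a measured number.

```python
# Rough VRAM estimate for 4-bit quantized LLaMA checkpoints.

def weight_gb(n_params_billion, bits=4):
    """GB needed for the weights alone at the given quantization width."""
    return n_params_billion * 1e9 * bits / 8 / 1e9

for n in (13, 30, 65):
    w = weight_gb(n)
    print(f"LLaMA-{n}B @ 4-bit: ~{w:.1f} GB weights, "
          f"~{w * 1.3:.0f}+ GB VRAM with assumed overhead")
```

That puts 30B at ~15 GB of weights, which is why ~20 GB of VRAM is the practical floor once activations and context are included.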
Simusid OP t1_jbvrbnu wrote
Reply to comment by lppier2 in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Well that was pretty much the reason I did this test. And right now I'm leaning toward SentenceTransformers.
Affectionate_Shine55 t1_jbvqc9r wrote
    model_ar.fit().predict(test)

Usually people do:

    res = model_ar.fit()
    res.summary()
    preds = res.predict(test)
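The reason for keeping the results object: `fit()` returns one object that serves both `summary()` and `predict()`, so chaining `fit().predict()` throws away fitted state you usually want to inspect. Sketched here with a trivial stand-in mean-only model, since statsmodels results objects follow the same model/results split:

```python
# Minimal model/results pattern: fit() returns a results object that is
# reused for both inspection and prediction.

class MeanModel:
    def __init__(self, y):
        self.y = y
    def fit(self):
        return MeanResults(sum(self.y) / len(self.y))

class MeanResults:
    def __init__(self, mu):
        self.mu = mu
    def summary(self):
        return f"fitted mean: {self.mu:.2f}"
    def predict(self, n):
        return [self.mu] * n

model_ar = MeanModel([1.0, 2.0, 3.0])
res = model_ar.fit()     # keep the results object...
print(res.summary())     # ...so you can inspect the fit
preds = res.predict(3)   # ...and reuse it for prediction
print(preds)             # [2.0, 2.0, 2.0]
```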
serge_cell t1_jbwt0s9 wrote
Reply to comment by currentscurrents in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
It's a question of training. AlphaGo was not trained against adversarial attacks. If it had been, this whole family of attacks wouldn't work, and finding a new adversarial attack would be an order of magnitude more difficult. It's shield and sword again.
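Adversarial training, the defense described above, can be sketched in a few lines: perturb each training example in the direction that hurts the model most, then take the gradient step on the perturbed batch. This is a toy logistic-regression version with arbitrary hyperparameters, not anything AlphaGo-specific.

```python
import numpy as np

# Adversarial training sketch: an FGSM-style inner "attack" on the inputs,
# followed by an ordinary gradient-descent "defense" step on the result.

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)       # linearly separable labels

w = np.zeros(d)
lr, eps = 0.1, 0.05
for _ in range(300):
    # Inner step (the attack): perturb inputs along the loss gradient.
    p = 1 / (1 + np.exp(-(X @ w)))
    grad_x = np.outer(p - y, w)          # d(loss)/d(input) per example
    X_adv = X + eps * np.sign(grad_x)
    # Outer step (the defense): train on the perturbed batch.
    p_adv = 1 / (1 + np.exp(-(X_adv @ w)))
    w -= lr * X_adv.T @ (p_adv - y) / n

acc = ((X @ w > 0).astype(float) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The "shield and sword" point shows up here too: a stronger inner attack (more steps, larger eps) makes the resulting model harder to fool, at the cost of more training compute.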