Recent comments in /f/MachineLearning
chef1957 t1_jcbzcfe wrote
Reply to comment by I_will_delete_myself in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
I agree with this. I used to do machine learning but shifted to working at an open-source company (Argilla) to be more involved with data quality, instead of overfitting and building models without enough data.
[deleted] OP t1_jcbxfjx wrote
MysteryInc152 t1_jcbwooc wrote
Reply to comment by VelveteenAmbush in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I agree there's a limit to how much they can withhold without releasing anything at all.
VelveteenAmbush t1_jcbw6mx wrote
Reply to comment by MysteryInc152 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
DeepMind's leaders would love to hoard their secrets. The reason they don't is that it would make them a dead end for the careers of their research scientists -- because aside from the occasional public spectacle (AlphaGo vs. Lee Sedol) nothing would ever see the light of day. If they stopped publishing, they'd hemorrhage talent and die.
OpenAI doesn't have this dilemma because they actually commercialize their cutting-edge research. Commercializing its research makes its capabilities apparent to everyone, and being involved in its creation advances your career even without a paper on Arxiv.
omniron t1_jcbvup9 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
All research gets used for productive entrepreneurial purposes. It's just kind of sad that OpenAI started with the mission of being open, literally in their name, and is now going in the opposite direction.
Google will eat their lunch though. Google has the world's largest collection of video, and that's the final frontier of large transformer network AI.
VelveteenAmbush t1_jcbv79q wrote
Reply to comment by ComprehensiveBoss815 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
The fact that they make their stuff available commercially via API is enough to make them 100x more "open" than the big tech companies.
crt09 t1_jcbv608 wrote
Reply to comment by Nhabls in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
> Alpaca couldn't be commercial because openai thinks it can forbid usage of outputs from their model to train competing models.
I don't think they claimed this anywhere? It seems that the only reason Alpaca's weights weren't released is Meta's policy for releasing LLaMA weights.
https://crfm.stanford.edu/2023/03/13/alpaca.html
> We have reached out to Meta to obtain guidance on releasing the Alpaca model weights, both for the 7B Alpaca and for fine-tuned versions of the larger LLaMA models.
Plus they already released the data they got from the GPT API, so anyone who has LLaMA 7B, the ability to implement the fine-tuning code in Alpaca, and 100 bucks can replicate it.
(EDIT: they released the code. Now all you need is a willingness to torrent LLaMA 7B and 100 bucks.)
VelveteenAmbush t1_jcbv0rs wrote
Reply to comment by Nhabls in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
GPT-4 is an actual commercial product though. AlphaGo was just a research project. No sane company is going to treat the proprietary technological innovations at the core of their commercial strategy as an intellectual commons. It's like asking them to give away the keys to the kingdom.
VelveteenAmbush t1_jcbu8nr wrote
Reply to comment by ScientiaEtVeritas in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
> While they also potentially don't release every model (see Google's PaLM, LaMDA) or only with non-commercial licenses after request (see Meta's OPT, LLaMA), they are at least very transparent when it comes to ideas, architectures, trainings, and so on.
They do this because they don't ship. If you're a research scientist or ML research engineer, publication is the only way to advance your career at a company like that. Nothing else would ever see the light of day. It's basically a better funded version of academia, because it doesn't seem to be set up to actually create and ship products.
Whereas if you can say "worked at OpenAI from 2018-2023, team of 5 researchers that built GPT-4 architecture" or whatever, that speaks for itself. The products you release and the role you had on the teams that built them are enough to build a resume -- and probably a more valuable resume at that.
Chuyito t1_jcbu40y wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
1. We are about to see a new push for a "robots.txt" equivalent for training data, e.g. if Yelp had a "datarules.txt" file indicating no training on its comments for private use. The idea being that you could specify a license which allows training on your data for open source, but not for profit. The benefit for Yelp would be similar to the original Netflix training data set we all used at some point.
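No such standard exists today, but to make the idea concrete, a hypothetical "datarules.txt" modeled on robots.txt might look something like this (the directive names and agent labels below are invented for illustration):

```text
# datarules.txt -- hypothetical training-data policy file
# (modeled on robots.txt; not a real standard)

User-agent: *
Disallow-training: /reviews/        # no commercial training on comments
Allow-training-license: CC-BY-NC    # noncommercial / open-source use OK

User-agent: open-research-crawler
Allow-training: /
```

As with robots.txt, compliance would be voluntary unless backed by a license with legal teeth.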
2. It's going to create a massive push for open frameworks. I can see NVIDIA going down the path of "appliances", similar to what IBM and many tech companies did for servers with pre-installed software. Many of those were open-source software, configured and ready to use or tune for your app, for anyone who wants to adjust the weight on certain bias filters but not write the model from scratch. Having an in-house instance of your "assistant" will be favorable to many (e.g. if you are doing research on biofuels, ChatGPT will censor way too much in trying to push "green", and lose track of the research in favor of policy).
Alimbiquated t1_jcbspbs wrote
Reply to comment by harharveryfunny in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
This kind of model needs vastly more input data than the human brain does to learn. It doesn't make sense to compare the two.
For example, ChatGPT is trained on 570 GB of data comprising 300 billion words.
https://analyticsindiamag.com/behind-chatgpts-wisdom-300-bn-words-570-gb-data/
If a baby heard one word a second, it would take nearly 10,000 years to learn the way ChatGPT did. But babies only need a few years, and they hear words at a much lower average rate.
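The back-of-envelope arithmetic checks out (taking the ~300 billion-word figure from the linked article at face value):

```python
# Years needed to hear ChatGPT's training corpus at one word per second.
WORDS = 300_000_000_000           # approximate training corpus size, in words
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = WORDS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")      # roughly 9,500 years
```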
So these models don't undermine the claim of innateness at all.
TooManyLangs t1_jcbsew4 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I imagine people will ditch closedAI and Microsoft in a few months and start using alternatives instead (Google, open source, others). I don't use Bing or BingGPT, and I still use a chatbot every day, so...
[deleted] OP t1_jcbqkhu wrote
RareMajority t1_jcbqi56 wrote
Reply to comment by ComprehensiveBoss815 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I'm not referring to OpenAI here. Meta released the weights to Llama and now anyone can build an AI based on that model for any purpose and without any attempt at alignment. Maybe there's middle ground between the two approaches.
jrejectj t1_jcbqgsn wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I'm not in this field, but will OpenAI become like Windows, which is largely adopted by normal users, while the others become like Linux? In operating-system terms, that is.
satireplusplus t1_jcbq2ik wrote
Reply to comment by OptimizedGarbage in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
> most notably dropout.
Probably unenforceable, and math shouldn't be patentable. Might as well try to patent matrix multiplication (I'm sure someone tried). Also, dropout isn't even complex math. It's an elementwise multiplication with randomized 1's and 0's; that's all it is.
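The whole technique really is just a mask multiply. A minimal NumPy sketch of the standard "inverted dropout" variant:

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    """Inverted dropout: zero each element with probability p, then
    scale the survivors by 1/(1-p) so the expected value is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(x.shape) >= p).astype(x.dtype)  # random 1's and 0's
    return x * mask / (1.0 - p)

x = np.ones((2, 4))
print(dropout(x, p=0.5))  # each entry is either 0.0 or 2.0
```

At inference time you simply skip the mask; the 1/(1-p) scaling during training is what makes that work.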
ComprehensiveBoss815 t1_jcbpv2t wrote
Reply to comment by gwern in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Whataboutism in full effect.
ComprehensiveBoss815 t1_jcbpqhb wrote
Reply to comment by Eaklony in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Then OpenAI should change their name to CapitalismAI and let a open source team of volunteers use the domain and project name.
killver t1_jcbpq7c wrote
Reply to [D] Is there an expectation that epochs/learning rates should be kept the same between benchmark experiments? by TheWittyScreenName
You've actually found an issue present in many research papers: they make unfair comparisons between methods using un-tuned hyperparameters. If you run an EfficientNet and a ViT model with the same learning rate, you will get vastly different results.
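A toy illustration of why a shared learning rate is unfair. Here gradient descent on a quadratic loss stands in for "training", and the two curvatures stand in for two architectures with different loss surfaces; the curvature values are made up for the demonstration:

```python
def final_loss(lr, curvature, steps=50):
    """Gradient descent on f(w) = 0.5 * curvature * w**2, starting at w = 1.
    A stand-in for training a model whose loss surface has this curvature."""
    w = 1.0
    for _ in range(steps):
        w -= lr * curvature * w
    return 0.5 * curvature * w * w

# Two "architectures" whose different curvatures imply different best LRs.
for name, curv in [("model_a", 1.0), ("model_b", 10.0)]:
    shared = final_loss(0.5, curv)  # one LR used for both models
    tuned = min(final_loss(lr, curv) for lr in [0.01, 0.05, 0.1, 0.5])
    print(f"{name}: shared-LR loss = {shared:.3g}, tuned-LR loss = {tuned:.3g}")
```

With the shared learning rate, model_b diverges and looks far worse than model_a; after a small per-model sweep, both converge. The same comparison, two opposite conclusions.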
ComprehensiveBoss815 t1_jcbpi7w wrote
Reply to comment by EnjoyableGamer in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I read the AlphaGo paper and felt it had enough technical detail for me to reproduce it.
satireplusplus t1_jcbpgio wrote
Reply to comment by ScientiaEtVeritas in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
They should rename themselves to ClosedAI. Would be a better name for what the company is doing now.
amhotw t1_jcbpd9z wrote
Reply to comment by I_will_delete_myself in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I understand that. I am pointing out the fact that they started on different paths. One of them was actually matching its name with what it was doing; the other was a contradiction from the beginning.
Edit: Wow, people either can't read or don't read enough history.
Uptown-Dog t1_jcboymu wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I think that patents over IP relating to software or math (and several other fields) are evil, evil, evil. If we're using them to do anything we're doing things wrong.
Quazar_omega t1_jcbzyzy wrote
Reply to comment by anomhali in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Saved by a comma