Recent comments in /f/MachineLearning
BrotherAmazing t1_jcdmuqt wrote
Reply to comment by Jadien in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
That’s not true. If I’m not infringing on any of your patents and you’re clearly infringing on one or more of mine, it’s a cease-and-desist or a lawsuit incoming, with no “mutually assured destruction”.
kross00 t1_jcdmgl4 wrote
Reply to [P] ControlNetInpaint: No extra training and you can use 📝text +🌌image + 😷mask to generate new images. by mikonvergence
Do you plan on releasing an Automatic1111 plugin for this?
Beatboxamateur t1_jcdm84x wrote
Reply to comment by bert0ld0 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Apple seems to be betting a lot on their upcoming XR projects, which will probably have a lot of AI integrated with the software, similar to Meta's vision. They're hugely hardware-focused, so I don't think they'll ever market some kind of LLM on its own; it'll almost always be built in to support their hardware.
BrotherAmazing t1_jcdloe7 wrote
Reply to comment by professorlust in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
You can still replicate results in private under a non-disclosure agreement, or verify/validate results without them getting published to the world.
I like open research, but research that happens in private can still be useful, and it's a reality.
rustlingdown t1_jcdjxlc wrote
Ethics teams are only useful if they are actively incorporated into, and listened to by, engineering and business teams.
To put it another way: if you're making money regardless of ethics, or building faster without it, it's not the fault of "ethics" that ethical considerations are ignored.
"Move fast and break things" has been the motto of Silicon Valley for decades. No reason for that to change when it comes to trampling ethical values (see: Cambridge Analytica and countless other examples).
In fact, even with these teams laid off, it's impossible to know whether they've been useful, given that we don't even know how they were integrated within Microsoft/Meta/ClosedAI. (They've just been fired, so probably not well.)
IMO it's the same issue as climate change and gas/energy companies. There's greenwashing just as much as there's ethicswashing. Only when corporations realized that addressing climate change was more profitable did anyone change their ways (and they're still struggling to!). Same thing with ethics and AI.
elehman839 t1_jcdjbmg wrote
Reply to comment by wywywywy in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Researchers and engineers seem to be moving from one organization to another pretty rapidly right now. Hopefully, that undermines efforts to keep technology proprietary.
dfreinc t1_jcdirwa wrote
They still have an Office of Responsible AI, and I believe that's valuable, but the counterpoint I've been told is that
>When studying software engineering, this is exactly what they taught us as best practice.
>If you want an unbiased assessment of whether your goals were met, it's good advice not to task the same team which worked towards those goals. People become emotionally attached to what they do and like being told they did a good job, among other reasons.
>I believe this idea generally applies to quality assurance.
by /u/Spziokles
minhrongcon2000 t1_jcdi9jh wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Firstly, since OpenAI has already released such a good chatbot, there's no point in Google or Meta enforcing patents on their own chatbots: a patent requires you to publish your work so other parties can verify it doesn't overlap with existing patents. Secondly, it's too late for Google to patent now, since the technology is already in wide use :D
BrotherAmazing t1_jcdhklt wrote
Reply to comment by sweatierorc in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
All of these companies publish some things, keep other things as trade secrets, patent still others, and so on. Each decision is a business decision.
This thread is baffling to me because so many people seem to have this idea that, at one time, AI/ML or any tech companies were completely “open” and published everything of any value. This is nowhere close to reality.
BrotherAmazing t1_jcdh35r wrote
Reply to comment by wywywywy in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
These companies already withhold massive amounts of trade secrets. They all do, and they have lawyers for exactly that.
BrotherAmazing t1_jcdgtka wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I don’t understand what OP is worried or complaining about. Every business can choose whether they wish to publish or release IP or withhold it and keep it as a trade secret. That is a business decision.
You are allowed to “benefit from” information other companies publish so long as you don’t break any laws.
OP implies OpenAI is infringing on patents and Google or Meta should enforce their patents and make OpenAI pay royalties, cease and desist, or face legal consequences. What patents is OpenAI infringing on? I have an INCREDIBLY hard time believing Google or Meta wouldn’t go after someone who was infringing on their patents if they became aware of it.
Eggy-Toast t1_jcdgpig wrote
Reply to comment by suflaj in [Discussion] What happened to r/NaturalLanguageProcessing ? by MadNietzsche
GPT-4 would understand the point he’s making
“The point being made here is twofold:
The user is praising ChatGPT for its effectiveness in writing sales emails for difficult clients, highlighting how it has streamlined their workload by replacing the need for an additional staff member and allowing them to multitask.
The user is also critiquing the choice of words used by someone else ("humiliated" and "jailbroken") in the given context, suggesting that the person may not have a proper understanding of the situation.
The logical conclusion drawn from these points is that ChatGPT is a valuable tool that can significantly improve efficiency in handling tasks like sales emails, while also implying that it is important to use appropriate terminology and demonstrate a clear understanding of a situation when discussing or debating any issue.”
L+ratio+maidenless
1F9 t1_jcdfbje wrote
Reply to [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
I am concerned that moving more stuff up into Python is a mistake. It limits support for other languages, like Rust, that talk to the C++ core. Also, executing Python is slower, which limits what the framework can do before being considered “too slow.”
Moving a bit to a high-level language seems like a win, but when that inspires moving large parts of a big project to high-level languages, I’ve seen unfortunate results. Each piece in a high-level language tends to impose non-obvious costs on all the other pieces.
This is nothing new. Way back in the day, Netscape gave up on Javagator, and Microsoft “reset” Windows Longhorn to rip out all the C#. Years of work by large teams thrown away.
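For anyone who hasn't looked at 2.0 yet, here's a minimal sketch of the torch.compile entry point the release centers on (assuming a PyTorch >= 2.0 install; the model and shapes are just illustrative):

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyNet()
# torch.compile captures the Python-level forward pass and lowers it to
# compiled kernels, so repeated calls skip most per-op Python dispatch.
compiled_model = torch.compile(model)

x = torch.randn(8, 16)
out = compiled_model(x)  # first call triggers compilation; later calls reuse it
print(out.shape)
```

The first call pays the compilation cost up front; whether that amortizes well enough to answer the Python-overhead concern above is exactly the open question.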
VinnyVeritas t1_jcdelqr wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Ultimately AI will become a big boys club, where big corporate will hold all the cards.
OpenAI just made the first leap towards that dystopian near future.
professorlust t1_jcdeiux wrote
Reply to comment by eposnix in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
The argument from a research perspective is that scale isn’t likely the Holy Grail.
It’s undoubtedly important, yes.
BUT for a researcher, the quest is to determine how important scale truly is AND to find ways to reduce dependence on it.
ReginaldIII t1_jcdbpqz wrote
Reply to [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
"GPT summary" jesus wept. As if reddit posts weren't already low effort enough.
Neat news about Pytorch.
MrPineApples420 t1_jcda2ix wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Can we even call it OpenAI anymore? That’s literally false advertising…
iJeff t1_jcd94db wrote
Reply to comment by Kenyth in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Do you happen to have any links to follow it?
GFrings t1_jcd941i wrote
Manage people who still do machine learning. Motivating scientists isn't unlike the constant iterative grind of matching the right reward function to the right model.
PsyEclipse t1_jcd8tig wrote
I actually went the other way. My background is as a FEDGOV hurricane scientist, and I now work at a popular weather app after teaching myself the ML part. We already do a lot of statistical modeling in meteorology (mainly bias correction and downscaling), and I made the decision to move away from dynamics-based research to statistics-based applications, since ML is about to take over our field as well. I'd argue meteorology is one of the original big data problems, but I guess that's a topic for a different day.
Anyway, there are a bunch of meteorology companies using AI/ML for a variety of things. Since you said you were doing computer vision, identifying things in satellite imagery (Sentinel-2, Landsat, maybe even GOES) is very popular right now; a rough sketch of what that can look like is below. It doesn't get you away from the core issues of tuning models, but maybe a different focus might work for you...?
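For concreteness, here's a minimal sketch of per-pixel classification on a multispectral tile; the 13-band Sentinel-2-style input, the tiny network, and the class names are all illustrative assumptions, not anyone's production pipeline:

```python
import torch
import torch.nn as nn

# Hypothetical input: one Sentinel-2-style tile with 13 spectral bands.
bands, height, width = 13, 256, 256
tile = torch.rand(1, bands, height, width)

# Minimal fully convolutional classifier producing per-pixel class logits.
model = nn.Sequential(
    nn.Conv2d(bands, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 4, kernel_size=1),  # 4 made-up classes, e.g. water/cloud/vegetation/urban
)

logits = model(tile)            # shape: (1, 4, 256, 256)
labels = logits.argmax(dim=1)   # per-pixel class map, shape: (1, 256, 256)
print(labels.shape)
```

Swapping in a real backbone, real bands, and real labels is where the actual work lives, which is also where the model-tuning grind comes back.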
I would just warn you that the pay in the field of meteorology sucks relative to BIG TECH.
Smallpaul t1_jcdn6lf wrote
Reply to comment by bert0ld0 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
What a wasted lead with Siri.
That said, Apple has an even higher reputation for polish and accuracy than Google does. They'd need something different from ChatGPT, and a lot more curated.