Recent comments in /f/MachineLearning

Beatboxamateur t1_jcdm84x wrote

Apple seems to be betting a lot on their upcoming XR projects, which will probably have a lot of AI integrated into the software, similar to Meta's vision. They're hugely hardware focused, so I don't think they'll ever market some kind of LLM on its own; it'll almost always be built in to support their hardware.

1

BrotherAmazing t1_jcdloe7 wrote

You can still replicate results in private under a non-disclosure agreement, or verify/validate results without them getting published to the world, though.

I like open research, but research that happens in private can still be useful, and it is a reality.

−4

rustlingdown t1_jcdjxlc wrote

Ethics teams are only useful if they are actively incorporated into and listened to by engineering and business teams.

To put it another way: if you're making money regardless of ethics, or if you're building faster without ethics - it's not the fault of "ethics" if these ethical considerations are ignored.

"Move fast and break things" has been the motto of the Silicon Valley for decades. No reason for that to change when it comes to trampling ethical values (see: Cambridge Analytica and countless other examples).

In fact, even with these teams laid off, it's impossible to know whether or not they've been useful, given that we don't even know how they were integrated within Microsoft/Meta/ClosedAI. (They've just been fired, so probably not well.)

IMO it's the same issue as climate change and gas/energy companies. There's greenwashing just as much as there's ethicswashing. Only when corporations realized that addressing climate change was more profitable did anyone change their ways (and they're still struggling to!). Same thing with ethics and AI.

34

dfreinc t1_jcdirwa wrote

They still have an Office of Responsible AI, and I believe that's valuable, but the counterpoint I've been told is that

>When studying software engineering, this is exactly what they taught us as best practice.

>If you want an unbiased assessment of whether your goals were met, it's good advice not to task the same team which worked towards those goals. People become emotionally attached to what they do, and like being told they did a good job, among other reasons.

>I believe this idea generally applies to quality assurance.

by /u/Spziokles

3

minhrongcon2000 t1_jcdi9jh wrote

Firstly, since OpenAI has already released such a good chatbot, there is no point in Google or Meta enforcing patents on their chatbots, since a patent requires you to publish your work so that other parties can verify it doesn't overlap with existing patents. Secondly, it's too late for Google to patent now, since the technology is already in wide use :D

1

BrotherAmazing t1_jcdhklt wrote

All of these companies publish some things, keep other things as trade secrets, patent others, and so on. Each decision is a business decision.

This thread is baffling to me because so many people seem to have this idea that, at one time, AI/ML or any tech companies were completely “open” and published everything of any value. This is nowhere close to reality.

3

BrotherAmazing t1_jcdgtka wrote

I don’t understand what OP is worried or complaining about. Every business can choose whether they wish to publish or release IP or withhold it and keep it as a trade secret. That is a business decision.

You are allowed to “benefit from” information other companies publish so long as you don’t break any laws.

OP implies OpenAI is infringing on patents and Google or Meta should enforce their patents and make OpenAI pay royalties, cease and desist, or face legal consequences. What patents is OpenAI infringing on? I have an INCREDIBLY hard time believing Google or Meta wouldn’t go after someone who was infringing on their patents if they became aware of it.

−5

Eggy-Toast t1_jcdgpig wrote

GPT-4 would understand the point he’s making

“The point being made here is twofold:

The user is praising ChatGPT for its effectiveness in writing sales emails for difficult clients, highlighting how it has streamlined their workload by replacing the need for an additional staff member and allowing them to multitask.

The user is also critiquing the choice of words used by someone else ("humiliated" and "jailbroken") in the given context, suggesting that the person may not have a proper understanding of the situation.

The logical conclusion drawn from these points is that ChatGPT is a valuable tool that can significantly improve efficiency in handling tasks like sales emails, while also implying that it is important to use appropriate terminology and demonstrate a clear understanding of a situation when discussing or debating any issue.”

L+ratio+maidenless

7

1F9 t1_jcdfbje wrote

I am concerned that moving more stuff up into Python is a mistake. It limits support for other languages, like Rust, that speak to the C++ core. Also, executing Python is slower, which limits what the framework can do before it's considered “too slow.”

Moving a bit to a high-level language seems like a win, but when that inspires moving large parts of a big project to high-level languages, I've seen unfortunate results. Each piece in a high-level language often imposes non-obvious costs on all the other pieces.
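To make the "executing Python is slower" point concrete, here's a rough, self-contained sketch (illustrative only, not taken from any framework's codebase) comparing a per-element Python loop against a single call that stays inside the compiled core:

```python
# Rough sketch of the overhead I mean: one interpreter round-trip per element
# versus a single vectorized call into the C++/CUDA core.
# (Illustrative only; exact numbers depend on hardware and build.)
import time
import torch

x = torch.randn(1_000_000)

# Python-level loop: crosses the Python boundary for every element.
start = time.perf_counter()
total = 0.0
for v in x[:10_000]:          # even a small slice is slow enough to show the point
    total += float(v) * 2.0
loop_time = time.perf_counter() - start

# Single call that stays in the compiled core for the full tensor.
start = time.perf_counter()
total_vec = (x * 2.0).sum().item()
vec_time = time.perf_counter() - start

print(f"python loop (10k elems): {loop_time:.4f}s")
print(f"vectorized  (1M elems):  {vec_time:.4f}s")
```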

This is nothing new. Way back in the day, Netscape gave up on Javagator, and Microsoft “reset” Windows Longhorn to rip out all the C#. Years of work by large teams thrown away.

−12

professorlust t1_jcdeiux wrote

The argument from a research perspective is that scale isn’t likely the Holy Grail.

It’s undoubtedly important, yes.

BUT for a researcher, the quest is to determine how important scale truly is AND to find ways to reduce dependence on it.

10

PsyEclipse t1_jcd8tig wrote

I actually went the other way. My background is as a FEDGOV hurricane scientist; I now work at a popular weather app after teaching myself the ML side. We already do a lot of statistical modeling in meteorology (mainly bias correction and downscaling), and I made the decision to move away from dynamics-based research toward statistics-based applications, since that's about to take over our field as well. I'd argue meteorology is one of the original big data problems, but I guess that's a topic for a different day.

Anyway, there are a bunch of meteorology companies using AI/ML for a variety of things. Since you said you were doing computer vision, identifying things in satellite imagery (Sentinel-2, Landsat, maybe even GOES) is very popular right now. It doesn't get you away from the core issues of tuning models, but maybe a different focus might work for you...?
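If you want a feel for it, here's a minimal sketch of the kind of thing people start with (simple band math rather than deep learning; the file paths and the 0.4 threshold are made up for illustration, not anyone's production pipeline):

```python
# Minimal sketch: read two Sentinel-2 bands with rasterio and flag vegetated
# pixels via NDVI. Paths are hypothetical; real granules ship as per-band
# JP2/GeoTIFF files.
import numpy as np
import rasterio

with rasterio.open("S2_B04_red.tif") as red_src, \
     rasterio.open("S2_B08_nir.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids divide-by-zero.
ndvi = (nir - red) / (nir + red + 1e-6)

# Crude "identify things" step: a vegetation mask from a fixed threshold.
vegetation_mask = ndvi > 0.4
print(f"vegetated fraction: {vegetation_mask.mean():.2%}")
```

From there, people usually swap the threshold step for an actual model (segmentation or patch classification), but the data-wrangling part looks much the same.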

I would just warn you that the pay in the field of meteorology sucks relative to BIG TECH.

7