Recent comments in /f/technology

Jorycle t1_jed962j wrote

>"Work in the office" is just the manager's way of saying they don't understand what you do and have no way to measure your output unless they can constantly look over at your desk and make sure you're not happy.

More like, it's the manager's way of saying "my manager is starting to notice my job serves no purpose when I can't roam the halls like a freak, badgering employees and picking up buzzwords to drop into conversations, and his manager is starting to notice, too."

11

autotldr t1_jed7k6m wrote

This is the best tl;dr I could make, original reduced by 54%. (I'm a bot)


> The Information's report also contains the potentially staggering thirdhand allegation that Google stooped so low as to train Bard using data from OpenAI's ChatGPT, scraped from a website called ShareGPT. A former Google AI researcher reportedly spoke out against using that data, according to the publication.

> According to The Information's reporting, a Google AI engineer named Jacob Devlin left Google to immediately join its rival OpenAI after attempting to warn Google not to use that ChatGPT data, because it would violate OpenAI's terms of service and Bard's answers would look too similar.

> Update March 30th, 2:02PM ET: Google would not answer a follow-up question about whether it had previously used ChatGPT data for Bard, saying only that Bard "isn't trained on data from ChatGPT or ShareGPT."


Extended Summary | FAQ | Feedback | Top keywords: Google^#1 data^#2 Bard^#3 ChatGPT^#4 train^#5

1

blueSGL t1_jed7gnq wrote

How has this narrative sprung up so quickly and spread so widely?

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

https://futureoflife.org/open-letter/ai-open-letter/

Back in 2015 the same org drafted an open letter outlining potential issues with AI, and that was years before any sort of commercialization effort.

There are alignment researchers who have signed the letter both times.

Current models cannot be explained or controlled in fine-grained enough detail. (The problem is being worked on by people like Neel Nanda and Chris Olah, but it's still very early stages and they need more time and more people working on it.)

The current 'safety' measures amount to bashing at a near-infinite whack-a-mole board whenever a model outputs something deemed wrong, and that is far from 'safe'.

21

uacabaca t1_jed74jg wrote

The vast majority of big tech companies are severely understaffed, with a lot of activities put on hold for lack of personnel and engineers working well over 8 hours per day just to keep things moving. So it's not "right-sizing", it's "firing" to meet the quarterly financial goals that look good for their stock value.

9

ElysiumSprouts t1_jed62og wrote

What I meant is "right-sizing". The big tech companies over-hired in order to starve smaller companies of the skilled workers they needed. They brought on employees who simply were not needed and who sat unutilized, just to monopolize the workforce. Reading the news, people got the impression that these big tech companies needed mass layoffs to downsize into effectiveness, but that's not entirely correct.

But sure, they fired workers and it harmed real people.

6