Apparently, OpenAI makes its employees sign agreements that could penalize them if they raise concerns about the company with federal authorities. And concerns there are to be had.
While going down this rabbit hole, I discovered that just last month, a group of predominantly former OpenAI employees issued an open letter warning about the risks of advanced AI and calling for stronger protections for employees who speak up about them.
The group, which also included a handful of former Google DeepMind employees, argued that AI companies have strong financial incentives to avoid effective oversight.
The letter went on to say that only current and former employees could hold such companies accountable to the public, especially given that "many of the risks we are concerned about are not yet regulated."
So what prompted the former OpenAI employees to draft such a letter, and what exactly is it that they want to warn us about?
Most of these former employees' concerns have to do with the supposed risks of artificial intelligence. These risks range from "the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," per the letter.
One would think this was just a bunch of people spouting conspiracy theories, but all of these risks have been acknowledged by OpenAI, Anthropic, and Google themselves, and even by governments around the world.
Kinda makes you wonder whether the software engineer who was fired for claiming Google's chatbot had become sentient was onto something.
Anyway, Vox reported in May that departing OpenAI employees were asked to sign restrictive exit agreements, with non-disparagement clauses that put their vested equity at risk.
After the report, OpenAI said it would not claw back anyone's vested equity and walked back the non-disparagement language in its exit paperwork.
But all that aside, what may have slipped off people's radars are a couple of high-profile departures from OpenAI's safety ranks, including Jan Leike, who co-led the company's superalignment team.
Back in May, Leike announced he had left over disagreements with the company about its core priorities.
"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products," Leike tweeted.
Essentially, we have a group of former OpenAI employees who think the company is putting shiny products ahead of safety, and who want to be able to say so publicly without fear of retaliation.
Some of the safety issues at OpenAI have also recently spilled into the press, with insiders alleging the company rushed through safety testing of its latest model to hit a launch deadline.
It is against this backdrop that Reuters now reports that OpenAI is working on a new reasoning-focused AI project, code-named "Strawberry."
Meanwhile, OpenAI is showing no signs of slowing down, continuing to ship new models at a rapid clip.
By all accounts, it would seem that OpenAI is prioritizing speed over safety, and if Strawberry truly is as great as media reports claim, that should give us all pause.
In Other News... 📰
- WazirX Hacked for $230M, Largely in SHIB, as Elliptic Says North Korea Behind Attack — via CoinDesk
- TTT models might be the next frontier in generative AI — via TechCrunch
- Amazon Prime Day ‘major cause of injuries’ for workers, Senate finds — via CNN
- Samsung agrees to acquire British startup Oxford Semantic for AI — via Reuters
- Meta won't offer future multimodal AI models in EU — via Axios
- Bye-bye bitcoin, hello AI: Texas miners leave crypto for next new wave — via CNBC
And that's a wrap! Don't forget to share this newsletter with your family and friends! See y'all next week. PEACE! ☮️
—
*All rankings are current as of Monday. To see how the rankings have changed, please visit HackerNoon's Tech Company Rankings page.
Tech, What the Heck!? is a once-weekly newsletter written by HackerNoon editors that combines HackerNoon's proprietary data with newsworthy tech stories from around the internet. Humorous and insightful, the newsletter recaps trending events that are shaping the world of tech. Subscribe to get it delivered straight to your inbox every week!