OpenAI’s track record on AI safety stinks — bordering on “functioning as a de facto advocacy arm” rather than a genuine research lab


The last couple of months have been turbulent for OpenAI. The company reportedly faced pressure from investors to restructure as a for-profit or risk losing funding and a potential takeover. Meanwhile, competition from rivals like Anthropic and Google grew so intense that Sam Altman reportedly declared a "code red" internally, pushing the company to improve ChatGPT.

OpenAI, the company behind ChatGPT, is still facing scrutiny. A recent report from WIRED indicates that at least two researchers have left, expressing worries that the company hasn’t been fully transparent with its published research. Specifically, they’re concerned OpenAI has been secretive and potentially misleading when the research shows downsides to the technology or its potential economic impact.

Tom Cunningham, who used to work as an economic researcher at OpenAI, recently left the company. What’s particularly noteworthy is that, in a farewell message to his colleagues, he stated he believed the team was moving away from real research and was instead acting more like a public relations department for OpenAI.

Jason Kwon, OpenAI’s Chief Strategy Officer, addressed the concerns in a company memo, emphasizing that OpenAI should be a responsible leader in AI. He stated the company needs to not only identify issues with the technology, but also actively work to create solutions.

I don’t believe we should avoid difficult topics. However, because we’re not just studying AI, but actively developing and releasing it into the world – essentially leading the way – we have a responsibility to own the consequences of our work.

OpenAI Chief Strategy Officer, Jason Kwon

OpenAI is rapidly growing, now valued at $500 billion, as it builds relationships with governments and businesses. Experts believe its AI technology could fundamentally change the way we work.

Recently, more people are warning that the current excitement around AI might be a bubble, similar to the dot-com boom and bust of the late 90s. Bill Gates, co-founder of Microsoft, believes many AI investments won’t pan out, stating, “A lot of these investments will ultimately fail.”

OpenAI has previously released research exploring how its technology might affect jobs. For example, a 2023 report, “GPTs Are GPTs,” identified professions most likely to be impacted by increasing AI automation.

People familiar with OpenAI’s internal discussions told WIRED, on the condition of staying anonymous, that the company is now less likely to share research showing potential negative economic effects of AI. Instead, they’re prioritizing the release of reports that emphasize the positive aspects.

OpenAI has faced scrutiny before regarding how it runs its business and develops AI. Last year, The Financial Times reported that the company appeared to prioritize launching new products over maintaining strong safety standards and a safety-focused culture.

Microsoft and OpenAI have formalized their partnership with a new agreement. This allows Microsoft to develop advanced AI, potentially even superintelligence, on its own or with other companies. Importantly, OpenAI can’t announce they’ve achieved AGI simply to become independent from Microsoft – an impartial group of experts must first confirm that the milestone has actually been reached.

Microsoft’s AI chief, Mustafa Suleyman, has stated the company will stop developing AI if it ever appears to be a threat to human existence. This reflects his commitment to creating AI that benefits people and works with them, rather than potentially replacing them.


2025-12-15 19:10