The entire OpenAI team focused on the existential risks of AI has either resigned or been absorbed into other research groups.


Days after Ilya Sutskever, OpenAI's chief scientist and one of the company's founders, announced his departure, Jan Leike, a former DeepMind researcher and the other co-leader of OpenAI's Superalignment team, announced on X that he had resigned.

According to Leike, he left the company over concerns about its priorities, which he believes are focused more on product development than on the safety of artificial intelligence.

Source: Jan Leike

In a series of posts, Leike stated that OpenAI's leadership had chosen the wrong core priorities and should focus on safety and preparedness as artificial general intelligence (AGI) development moves forward.

AGI refers to a hypothetical artificial intelligence that can perform as well as or better than humans across a wide range of tasks.

After spending three years at OpenAI, Leike criticized the company for prioritizing the development of flashy products over fostering a strong AI safety culture and robust processes. He stressed the urgent need to allocate resources, especially computing power, to support his team's safety-critical research, which he said had been neglected.

“…I had been disagreeing with OpenAI’s leadership about the company’s core priorities for some time until we finally reached a breaking point. For the past few months, my team has been sailing against the wind…”

OpenAI formed a new research team in July last year to prepare for the emergence of highly intelligent artificial intelligence that can outsmart and outwit its creators. OpenAI’s chief scientist and co-founder, Ilya Sutskever, was named co-leader of this new team, which received 20% of OpenAI’s computational resources.


Following recent resignations, OpenAI has chosen to disband the “Superalignment” team and consolidate its functions into other research projects within the organization. This decision is said to be a result of ongoing internal restructuring, which began in response to the governance crisis in November 2023.

Sutskever was part of the effort that led OpenAI's board of directors to briefly fire Sam Altman as CEO in November of last year, before reappointing him to the position after employee backlash.

According to reports, Sutskever told employees that the board's decision to remove Sam Altman fulfilled its responsibility to ensure that OpenAI develops artificial general intelligence that benefits all of humanity. As one of six board members, Sutskever emphasized the board's commitment to aligning OpenAI's goals with the public good.


