The UK’s Artificial Intelligence (AI) Safety Institute is set to expand internationally with a new site in the US.

On May 20, Michelle Donelan, the UK’s technology minister, announced that the institute would open its first overseas office in San Francisco in the summer.

The announcement said the strategic choice of a San Francisco office would allow the UK to “take advantage of the wealth of technical talent available in the Bay Area” and engage with some of the world’s largest AI laboratories, which operate between London and San Francisco.

Additionally, it said the move would help it “solidify” relationships with key players in the US to push for global AI safety “in the public interest.”

The London branch of the AI Safety Institute already has a team of 30 and is on track to expand, building further expertise particularly in risk assessment of frontier AI models.

Donelan said the expansion represents the UK’s leadership and vision for AI safety in action:

“This is a pivotal moment in the UK’s ability to examine the risks and potential of AI from a global perspective, strengthening our partnership with the US and paving the way for other countries to benefit from our expertise as we continue to lead the world on AI safety.”

This follows the UK’s historic AI Safety Summit held in London in November 2023. The summit was the first of its kind to focus on AI safety on a global scale.

Related: Microsoft faces a multibillion-dollar fine in the European Union over Bing AI

The event featured leaders from around the world, including the US and China, along with leading voices in AI such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Elon Musk.

In the same announcement, the UK said it would also release a selection of the institute’s recent results from safety tests of five publicly available advanced AI models.

It anonymized the models and said the results provide a “snapshot” of the models’ capabilities rather than classifying them as “safe” or “unsafe.”

Among the findings: several models could complete basic cybersecurity challenges, though they struggled with more advanced ones. Several models were also found to demonstrate doctorate-level knowledge of chemistry and biology.

The institute concluded that all models tested were “highly vulnerable” to basic jailbreaks and that none could complete “more complex, time-consuming tasks” without human supervision.

Ian Hogarth, chair of the institute, said the evaluations would help contribute to an empirical assessment of model capabilities:

“AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing.”

Magazine: “Sic AIs on each other” to prevent an AI apocalypse: David Brin, science fiction author
