
Britain expands its AI Safety Institute to San Francisco amid scrutiny over regulatory shortcomings

LONDON — The British government is extending its testing facility for advanced artificial intelligence models to the United States, a move intended to bolster the U.K.'s standing as a leading global force in addressing AI risks and to deepen cooperation with the U.S. as governments compete for leadership in AI.

On Monday, the government announced a U.S. counterpart to its AI Safety Institute, the state-backed body focused on testing advanced AI systems to ensure their safety. The new office will open in San Francisco this summer.

The U.S. branch of the AI Safety Institute will recruit a technical team led by a research director. Currently, the London institute has a team of 30 and is chaired by Ian Hogarth, a prominent British tech entrepreneur and founder of the music concert discovery site Songkick.

U.K. Technology Minister Michelle Donelan said the U.S. expansion of the AI Safety Institute demonstrates British leadership in AI. She called the initiative a pivotal moment for the U.K. to study AI risks and potential from a global perspective, strengthening the partnership with the U.S. and enabling other countries to draw on British expertise in AI safety.

The government highlighted that the expansion to the U.S. will allow the U.K. to access the tech talent in the Bay Area, engage with leading AI labs in London and San Francisco, and solidify relationships with the U.S. to advance AI safety for public benefit. San Francisco is also home to OpenAI, the Microsoft-backed company behind the AI chatbot ChatGPT.

The AI Safety Institute was established in November 2023 during the AI Safety Summit, held at England's Bletchley Park, to foster international cooperation on AI safety. The U.S. expansion was announced just ahead of the AI Seoul Summit in South Korea, which was proposed at the Bletchley Park summit and takes place this Tuesday and Wednesday.

Since its establishment, the AI Safety Institute has made progress in evaluating advanced AI models from leading industry players. The government noted that while several models successfully completed cybersecurity challenges, they struggled with more advanced tasks. Some models displayed Ph.D.-level knowledge of chemistry and biology but remained vulnerable to "jailbreaks," prompts crafted to bypass their safeguards and elicit harmful responses. The models were also unable to carry out complex, time-consuming tasks without human oversight.

The government did not disclose which AI models were tested but had previously secured agreements from OpenAI, DeepMind, and Anthropic to allow their AI models to be evaluated for safety research. This development occurs as Britain faces criticism for its lack of formal AI regulations, while the European Union is advancing with the AI Act, which is poised to become a global standard for AI legislation once fully approved by EU member states.
