Google and Anthropic Develop AI Safety Standards


    As discussions about the responsible development and use of AI continue to take center stage at global conferences, Google is collaborating with US startup Anthropic on a number of initiatives, including AI guardrails.

    The collaboration focuses on AI safety standards, leverages Google’s TPU v5e accelerators for AI inference, and reflects a shared commitment to the highest standards of AI security.

    Protection from hackers

    According to Metaverse Post, as part of the deal, Anthropic will use Google’s latest Cloud TPU v5e chips to power its large language model, Claude. This is expected to provide a high-performance, affordable solution for medium- and large-scale AI training and inference workloads.

    A Bloomberg report says Anthropic has also agreed to spend more than $3 billion on Google Cloud services over the next four years, in a deal that includes TPUs as well as other services.

    According to Google, Anthropic will use Chronicle Security Operations to protect its cloud environment from hackers. It’s a Google Cloud service “designed to ease the task of detecting cyberattacks.”

    According to a Silicon Valley report, Anthropic has been using Google Cloud since its launch in 2021 and has also adopted Security Command Center, a tool that helps enterprises remediate vulnerabilities and insecure configuration settings in their cloud environments.

    “Anthropic and Google Cloud share the same values when it comes to AI development: it must be done in bold and responsible ways,” said Google Cloud CEO Thomas Kurian.

    “Building on years of collaboration, our expanded partnership with Anthropic allows us to safely and reliably bring AI to more people, and provides another example of how the most innovative and fastest-growing AI startups are building on Google Cloud,” he added.

    The partnership builds on the companies’ long-standing relationship, with Anthropic previously relying on Google services such as Google Kubernetes Engine, AlloyDB, and BigQuery to support its AI research and operations needs.



    The announcement comes after both Google and Anthropic attended the inaugural AI Safety Summit (AISS), hosted by the UK government at Bletchley Park. Senior executives from Google and Anthropic attended the summit along with other industry leaders, government officials, civil society groups, and experts to address concerns about advanced AI and strategies for its responsible development.

    According to Google Cloud, this builds on Google and Anthropic’s respective collaborations “with the Frontier Model Forum, an effort to help develop better measures for AI safety.”

    This further highlights Google and Anthropic’s commitment to fostering dialogue around AI safety and aligning it with societal values.

    “Our long-standing partnership with Google is built on a shared commitment to developing AI responsibly and deploying it in ways that benefit society,” said Anthropic CEO Dario Amodei.

    “We look forward to continuing to work together to make steerable, reliable, and interpretable AI systems available to more companies around the world,” added Amodei, who co-founded Anthropic.

    Both tech companies are also working with a nonprofit called MLCommons, which works to create trusted third-party data about AI systems.

