British financial institutions are set to receive access within days to Anthropic's Claude Mythos, a powerful new artificial intelligence model that the company itself has described as posing unprecedented cybersecurity risks and has warned should not yet be released to the general public. Anthropic, the US-based AI safety company behind the Claude family of AI tools, has so far limited Mythos to a small group of primarily American firms, including Amazon, Apple and Microsoft. Pip White, Anthropic's head of UK, Ireland and northern Europe operations, confirmed that British banks would be brought into that group imminently, saying that engagement from UK chief executives had been "significant" in recent days.
The urgency stems from Mythos's demonstrated ability to identify security flaws at a scale that has alarmed governments and regulators worldwide. Anthropic has said the model found vulnerabilities in every major operating system and browser, and warned in a public blogpost that AI models had reached "a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." Banks and governments are being offered early access specifically so they can test and harden their own systems before any broader public release. The US Treasury has already encouraged major American banks to do the same.
The issue dominated discussions at this week's International Monetary Fund and World Bank spring meetings in Washington, where finance ministers, central bankers and regulators gathered. Canadian Finance Minister François-Philippe Champagne captured the prevailing mood, telling the BBC that unlike a physical threat such as the Strait of Hormuz, the critical Persian Gulf shipping lane, the risk posed by Mythos was an "unknown unknown" that demanded urgent attention and robust safeguards. European Central Bank president Christine Lagarde acknowledged that Anthropic had acted responsibly by flagging the dangers, but warned that no adequate governance framework yet exists to manage them. Bank of England governor Andrew Bailey, who also chairs the Financial Stability Board of global regulators, called the development "a very serious challenge" and raised the difficult question of when precisely regulators should act: intervene too early and they risk stifling beneficial innovation; too late and they risk losing control.
From the banking industry, Barclays chief executive CS Venkatakrishnan said the situation was "serious enough that people have to worry," stressing the need to understand and fix exposed vulnerabilities quickly. Investors are also watching closely: James Wise, a partner at Balderton Capital and chair of the UK government-backed Sovereign AI unit, a £500m venture fund investing in British AI companies, described Mythos as "the first of what will be many more powerful models" capable of exposing systemic weaknesses. He expressed hope that the same models that uncover vulnerabilities would ultimately help fix them. Adding to the sense of urgency, financial industry sources indicated that at least one other major US AI company may soon release a model of comparable power, but without the safety precautions Anthropic has applied.