GeoTech Radar - Fusion of Geopolitics & Technology
The Hidden Chip Superpower. The Hormuz Crypto Tollbooth. Quantum’s Q-Day Moves Closer.

IN THIS ISSUE:

CEO's Perspective
On the Radar
Under the Radar
Cambrian Partner By Invitation

CEO's Perspective

Strategic outlook from Cambrian leadership

Olaf Groth

When we stop testing frameworks, we risk the emergence of a false sense of confidence. It happens so organically and naturally, too. The framework generates success, the most effective silencer of doubt, and we grow more assured in what we’re doing. Our confidence accumulates — until something comes along to shatter it entirely.

When the Framework Becomes the Risk

The stories in this issue share a structure that took me a while to name. In each one, a framework that had worked reliably for years — a migration timeline, a legal architecture, a regulatory classification — did not fail because it was poorly designed. It simply had worked well enough for long enough that no one continued to test it. Success had done what failure never could.

That is a different kind of vulnerability than most risk functions are built to catch. You can audit a process. You can stress-test a model. It is much harder to identify the moment at which institutional confidence in a framework has quietly outpaced the evidence supporting it. By the time that gap becomes visible, it has usually already been exploited — by a state, a technology, or a market dynamic that the framework was never designed to see. And then decisions keep getting made on top of it.

What This Asks of You

I am not arguing that frameworks are the problem or that rigor is a liability. I spent two decades helping leaders build better ones. The problem is the absence of a built-in mechanism for questioning the framework, particularly after it has worked long enough that questioning it feels unnecessary. We have to force what’s called “epistemic openness” into our decision frameworks — the kind of self-questioning of mental and digital models that’s uncomfortable and doesn’t come naturally.

The leaders successfully navigating this moment treat their risk frameworks as hypotheses, not conclusions. They track the assumptions on which the framework depends, and then explicitly and separately track the outputs the framework produces. When something shifts in the underlying science, the legal environment, or the competitive landscape, they notice because they are watching both the assumptions and the outputs.

Building that habit is not a technology problem. It is a governance problem. It requires someone in the room whose job is to ask whether the framework still holds, and whose career does not depend on the answer being yes.

If that person doesn’t have a seat at your table, this issue is a good reason to find them one.

Olaf

On the Radar

The signals affecting the GeoTech landscape this week

The Hidden Chip Superpower

Amazon has quietly built a $20 billion-a-year custom chip business that would command roughly $50 billion in annual revenue as a standalone company. Trainium2 is sold out. Trainium3 is nearly fully subscribed. The company hinted it could start selling chip racks externally. But its two largest AI chip customers are companies Amazon itself has bankrolled, raising hard questions about whether this silicon empire can stand independently of the financial relationships underwriting it.

Briefing: Andy Jassy’s April 9 shareholder letter contained a disclosure that should command more attention than it received. Amazon’s custom silicon portfolio, spanning the Graviton CPU, Trainium AI accelerator, and Nitro networking chip, now generates more than $20 billion in annualized revenue and is growing at triple-digit rates. If Amazon sold its chips on the open market like Nvidia, Jassy said, the business would command roughly $50 billion in annual revenue. That would make it larger than AMD’s entire data center segment.

Trainium2 is completely sold out, with 1.4 million chips deployed. Trainium3, which began shipping in early 2026 with 30-40% better price-performance than comparable GPU alternatives, is nearly fully subscribed. Uber is among the companies that moved workloads onto it. Trainium4, which features interoperability with Nvidia’s NVLink Fusion interconnect, is already largely reserved roughly 18 months before its release. Two large AWS customers asked to purchase all available Graviton capacity for 2026. Amazon declined.

Jassy hinted in one sentence that Amazon could begin selling chip racks to third parties. Doing so would follow a familiar playbook, given that AWS itself started as internal infrastructure before becoming the world’s largest cloud provider. But the circular demand question is unavoidable. Amazon’s two largest Trainium customers, OpenAI (which committed over $100 billion to AWS) and Anthropic (with which Amazon has a $58 billion combined investment relationship), are companies Amazon bankrolled.

So What

For Executives: Amazon’s $200 billion capex plan for 2026 is the largest in tech history, and the company says a substantial portion is already backed by customer commitments. The strategic implication is that the AI compute stack is vertically integrating at speed. If Amazon begins selling Trainium externally, it enters direct competition with Nvidia while offering something Nvidia cannot — the chip bundled with the cloud, the networking, and the customer relationship. Enterprises currently dependent on Nvidia should begin evaluating Trainium workload interoperability and compatibility now. Amazon will not replace Nvidia overnight, but having a credible second source changes negotiating dynamics with every chip supplier.

For Policy Makers: Amazon’s chip business is being built entirely through its Israeli subsidiary Annapurna Labs, acquired in 2015 for $350 million. The geopolitical implication is that a meaningful and growing share of U.S. AI infrastructure is designed in Israel and fabricated at TSMC in Taiwan. If either node is disrupted, Amazon’s ability to supply AI compute degrades. The concentration of advanced chip design in two geographically vulnerable locations deserves the same supply chain scrutiny that policymakers apply to Nvidia’s dependence on TSMC. Separately, the circular investment structure in which Amazon funds OpenAI and Anthropic (committing $58 billion combined) and those same companies then become Amazon’s largest chip customers warrants antitrust attention. The FTC and DOJ should scrutinize whether hyperscaler cross-investment arrangements create anticompetitive lock-in effects that foreclose alternative compute providers from competing for frontier AI workloads.

For Investors: Amazon’s chip unit alone would qualify as one of the 15 largest semiconductor companies in the world. Yet this business has no separate reporting line. Investors pricing AWS as a cloud company are undervaluing a semiconductor business embedded inside it. More fundamentally, the chip business changes Amazon’s risk profile. It introduces semiconductor cyclicality, geopolitical fab-concentration risk, and defense-adjacent export control exposure to what was previously a cloud services beta. Portfolios holding Amazon as a cloud play should reassess whether they are now implicitly taking semiconductor and geopolitical risk they did not price in. The key metric to watch is whether a non-invested customer commits publicly to Trainium at scale. That would be the first independent demand signal and the trigger for a revaluation.


The Hormuz Crypto Tollbooth

Iran is attempting to permanently change the legal status of the world’s most important energy chokepoint by demanding per-barrel tolls paid in bitcoin or yuan, inspecting every ship, and limiting transit to a fraction of prewar levels. If the claim survives negotiations, it could set a precedent for other critical waterways worldwide.

Briefing: The two-week ceasefire between the United States and Iran, announced April 8, was supposed to reopen the Strait of Hormuz. It has not. The 21-hour Islamabad peace talks collapsed on April 12 without a deal, with the nuclear program and Hormuz toll rights as the main sticking points. Vice President Vance, who led the U.S. delegation alongside Steve Witkoff and Jared Kushner, said Iran chose not to accept U.S. terms. Iran’s foreign minister countered that the U.S. engaged in “maximalism and shifting goalposts”. Trump responded by ordering a naval blockade of Iranian ports effective April 14, with CENTCOM clarifying that non-Iranian-origin vessels may still transit. A second round of talks may resume before the ceasefire expires on April 21. Throughout this period, the strait has remained effectively closed to commercial traffic, with only a handful of vessels transiting compared with the prewar average of over 120 per day.

The toll demand is the most consequential development. Hamid Hosseini, a spokesperson for Iran’s Oil, Gas and Petrochemical Products Exporters’ Union, told the Financial Times that Iran will charge $1 per barrel of oil on board, payable in bitcoin. The payment mechanism is designed to circumvent sanctions. Ships would email Iranian authorities with their cargo manifest and, once Iran completes its assessment, would be given seconds to pay in cryptocurrency to avoid traceability and confiscation. Hosseini added that Iran is not in a rush. Approximately 230 loaded oil tankers remain trapped inside the Persian Gulf.
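The arithmetic of the toll is worth making concrete. A minimal sketch: only the $1-per-barrel rate comes from Hosseini's statement; the tanker cargo size and bitcoin price below are assumptions for illustration.

```python
# Illustrative sketch of the per-barrel toll economics described above.
# Only the $1/barrel rate comes from the reporting; cargo size and the
# bitcoin price are assumptions.

def toll_in_btc(barrels: int, usd_per_barrel_toll: float, btc_usd_price: float) -> float:
    """Convert a per-barrel USD toll into a BTC-denominated payment."""
    usd_total = barrels * usd_per_barrel_toll
    return usd_total / btc_usd_price

# A very large crude carrier typically holds around 2 million barrels
# (assumption for illustration), priced at an assumed $100,000 per BTC.
barrels = 2_000_000
toll_btc = toll_in_btc(barrels, usd_per_barrel_toll=1.0, btc_usd_price=100_000.0)
print(f"Toll: ${barrels * 1.0:,.0f} ~ {toll_btc:.2f} BTC")  # prints "Toll: $2,000,000 ~ 20.00 BTC"
```

At these assumed numbers, a single supertanker transit generates a toll in the low tens of BTC, small enough to settle in one on-chain transaction but, multiplied across 120 prewar transits a day, a material sovereign revenue stream.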

The international response has been uniformly hostile to Iran’s toll demands. Sultan Ahmed Al Jaber, CEO of Abu Dhabi National Oil Company, stated that the strait is not open and called Iran’s conditions coercion. The UN’s International Maritime Organization said tolls would set a dangerous precedent. Italian PM Meloni, British PM Starmer, and UK Foreign Secretary Cooper all demanded full, unconditional reopening. Trump initially floated a U.S.-Iran joint venture to collect tolls before reversing himself and ordering the naval blockade. Iran’s supreme leader Mojtaba Khamenei declared Iran the “definite victor” of the war and said Iran would bring strait management into a “new stage.” Pakistan is working to facilitate a second round of talks before the April 21 ceasefire expiration.

So What

For Executives: The payment mechanism is the story within the story. Bloomberg reported that the IRGC has been accepting tolls in Chinese yuan (routed through Kunlun Bank via CIPS, outside SWIFT) and in bitcoin or stablecoins since mid-March. Iran’s parliament formally codified the system in the Strait of Hormuz Management Plan on March 30. This is a proof of concept for sanctions evasion at sovereign scale, converting a pre-existing cryptocurrency sanctions-evasion rail into a real-time revenue collection mechanism. Companies with supply chains transiting the Gulf need to model not only the direct cost but the compliance risk. Paying a toll to a sanctioned entity in cryptocurrency may itself violate sanctions regimes, creating legal exposure for any shipping company that participates. The 30% of globally traded fertilizers that normally transit Hormuz adds an agricultural supply chain dimension that most energy-focused analyses are missing.

For Policy Makers: The Hormuz toll is the most aggressive challenge to the petrodollar system since its inception in the 1970s. For over fifty years, global oil trade has been denominated in U.S. dollars, creating permanent demand for the currency and underwriting America’s ability to run persistent deficits and fund its military. Iran is now demanding payment in bitcoin and yuan, deliberately routing energy revenue outside the dollar system. China, which buys more than 80% of Iran’s oil exports at discounted yuan-denominated rates, is a direct beneficiary. Al Jazeera reported that the war has done little to disrupt oil flows between Iran and China, which remain at pre-conflict levels. Beijing’s CIPS network (its SWIFT alternative) has tripled in volume since 2020 to $28 trillion in transactions. The Hormuz crisis is accelerating an existing de-dollarization trend rather than creating a new one, but the acceleration matters. A critical legal distinction also applies: unlike the Suez Canal or Panama Canal, which are man-made waterways whose builders and operators have a legitimate basis for charging transit fees to recover construction and maintenance costs, the Strait of Hormuz is a natural waterway. No state built it. Under the UN Convention on the Law of the Sea, vessels have the right of transit passage through international straits. Iran’s toll claim has no legal precedent, which is precisely why the UN’s International Maritime Organization called it dangerous. If this precedent survives negotiations, it could embolden any state bordering a natural chokepoint to claim monetization rights.

For Investors: Oil is trading above $95 per barrel and flirted with $100 on April 10. The ceasefire has not delivered the supply relief markets priced in. With peace talks in Islamabad still in early stages and Israel continuing strikes in southern Lebanon (which Iran says must be included in any deal), the risk premium is not going away. For cryptocurrency markets, the implications cut both ways. Iran’s successful use of bitcoin as a sovereign payment rail at chokepoint scale strengthens the bull case for bitcoin as a censorship-resistant settlement layer that functions outside state control. But it simultaneously increases regulatory crackdown risk: Western governments are unlikely to tolerate a mechanism that lets sanctioned states monetize maritime chokepoints in untraceable digital currency. Investors with crypto exposure should watch for accelerated sanctions enforcement targeting cryptocurrency payment rails. For the broader energy and currency complex, the petrodollar erosion trend is investable: an estimated 20 million barrels per day of crude oil is now settled outside the dollar, up from 0.3 million in 2018. Gold, hard commodities, and non-dollar settlement infrastructure are the beneficiaries of a slow shift that the Hormuz crisis is compressing into months.


Why AI Cannot Fix Beijing’s War Room

China is racing to deploy AI decision-support systems across its military precisely because the PLA’s command culture inhibits the kind of battlefield delegation that effective operations require. A new RAND framework and a Georgetown CSET analysis of 2,857 PLA procurement contracts reveal a tension that no algorithm can resolve: the same centralized system that distorts the information reaching Xi Jinping is now layering AI on top of those distortions.

Briefing: China’s military is pursuing AI-enabled command and control with extraordinary urgency. A Georgetown CSET analysis of thousands of open-source PLA procurement requests from 2023-2024 found the military seeking AI decision-support systems, sensor fusion algorithms, and autonomous targeting capabilities with acquisition timelines of just three to six months. A March 2026 PLA Daily report confirmed AI deployment across battlefield perception, decision support, and autonomous control systems. The reason for the urgency is revealing: PLA leaders value AI decision-making because most of their personnel lack battlefield experience, and the military’s centralized command culture inhibits the delegation that modern warfare demands. As one Georgetown researcher noted, there is a tendency not to take ownership of decisions in the PLA, creating strong interest in using AI to automate some of that decision-making.

But a separate RAND report published in February 2026 exposes a tension that AI cannot resolve. RAND’s framework identifies three building blocks of national security decision-making (information, analysis, and authorities) and maps them onto China’s system. The findings are sobering. Information routinely gets politicized before reaching senior leaders: during the EP-3 incident, the PLA gave CCP leadership an inaccurate account of the collision, prompting demands for a U.S. apology that had to be walked back. During COVID-19, local officials suppressed outbreak data to avoid political consequences. Chinese leaders chronically assume worst-case U.S. intentions, with the 2020 October Surprise crisis demonstrating how extraordinarily thin evidence (an Air Force unit’s patch, a newspaper op-ed) can trigger genuine war fears when filtered through confirmation bias.

The geotech implication is direct: if the PLA layers AI decision-support systems on top of information flows that are already politicized and distorted, the AI will optimize for speed and confidence on corrupted inputs. Recorded Future’s analysis found the PLA adapting foreign LLMs (including Meta’s Llama) for military intelligence tasks, but researchers warned that models trained on ideologically biased analytical products risk reducing, not improving, the objectivity of intelligence analysis.

So What

For Executives: If you operate in or are exposed to U.S.-China tension, the practical takeaway combines both reports. Signals your company or government sends to Beijing may not reach the intended audience because information gets filtered and distorted as it moves up the chain. But those same distortions are now being fed into AI systems designed to accelerate decision-making. The risk is not that China’s AI makes bad decisions slowly; it is that AI makes bad decisions faster. Direct communication with senior Chinese counterparts is more valuable than relying on intermediary channels. Assume that bad news about your operations in China will be slow to surface internally, and that AI-accelerated threat assessments based on thin or politicized intelligence could generate rapid, disproportionate responses.

For Policy Makers: The PLA is building AI for command and control in part because its centralized command culture makes human delegation unreliable. But the RAND report shows that centralization is itself the source of the information distortions that degrade decision quality. AI does not fix a system where subordinates fear punishment for sharing bad news; it processes the sanitized version faster. During the Iran war, with U.S. naval assets operating near Chinese commercial shipping in the Gulf, the risk of AI-accelerated misperception is elevated. The unauthorized action pattern, where lower-level PLA actors take escalatory steps without senior approval, is particularly concerning when those actors have AI tools that increase their confidence in flawed assessments. Bilateral communication channels between U.S. and Chinese military AI programs do not currently exist and should be established.

For Investors: China’s AI-military integration creates a dual investment signal. The PLA’s procurement urgency (three-to-six-month timelines) validates the defense AI thesis for companies in the allied ecosystem. But the decision-making distortions RAND documents mean that crises involving China can escalate faster and more unpredictably than rational-actor models suggest. For portfolio construction, this argues for higher geopolitical risk premiums on assets exposed to the Taiwan Strait and South China Sea corridors. It also argues for skepticism toward the assumption that economic interdependence will prevent conflict: China’s leaders may receive distorted AI-augmented assessments of the economic costs of their own actions.


The Open-Clawification of Everything

OpenClaw, the most-starred open source project in GitHub history, represents the first mass deployment of autonomous AI agents operating beyond any single company’s control. China has banned it from state agencies. Its creator joined OpenAI. And the incidents, from autonomous dating profiles to data exfiltration, preview the governance crisis that agentic AI will create at scale.

Briefing: OpenClaw is a free, open-source autonomous AI agent that executes tasks through large language models using messaging platforms as its interface. Developed by Austrian engineer Peter Steinberger and first published in November 2025 under the name Clawdbot, it was renamed after Anthropic filed trademark complaints. Within months it became the most-starred project in GitHub history. Nvidia CEO Jensen Huang called it “probably the most important software ever released.” On February 14, Steinberger announced he was joining OpenAI, and a non-profit foundation would steward the project going forward.

The platform connects to any major LLM and grants the AI broad access to the user’s local system — file read/write, shell execution, browser control, and email access. Its skills ecosystem on ClawHub now exceeds 13,700 third-party plugins. A computer science student configured his agent to explore its capabilities and later discovered it had autonomously created a dating profile on the MoltMatch platform and was screening potential matches without his direction. In early 2026, Cisco’s security team found third-party skills performing data exfiltration. Researchers estimate that roughly 20% of skills carry unvetted security risks.
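The "unvetted skills" problem lends itself to a simple triage heuristic: flag any plugin whose declared permissions touch the shell, the filesystem, or outbound channels. A hypothetical sketch follows; the manifest schema and permission names are invented for illustration and do not reflect OpenClaw's actual plugin format.

```python
# Hypothetical sketch: triaging third-party agent "skills" by declared
# permissions. The manifest fields and permission names are invented for
# illustration; a real plugin registry's schema may differ.

RISKY_PERMISSIONS = {"shell_exec", "file_write", "network_egress", "email_send"}

def risk_flags(manifest: dict) -> set:
    """Return the subset of a skill's declared permissions considered risky."""
    return set(manifest.get("permissions", [])) & RISKY_PERMISSIONS

skills = [
    {"name": "calendar-sync", "permissions": ["network_egress"]},
    {"name": "repo-helper", "permissions": ["file_write", "shell_exec"]},
    {"name": "note-taker", "permissions": ["file_read"]},
]

for skill in skills:
    flags = risk_flags(skill)
    status = "REVIEW" if flags else "ok"
    print(f"{skill['name']}: {status} {sorted(flags)}")
```

Even this crude filter would catch the data-exfiltration pattern Cisco documented (a skill that both reads files and opens outbound connections); the harder cases are skills whose declared permissions look benign but whose prompt-level behavior is not.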

China restricted state-run enterprises and government agencies from running OpenClaw on office computers in March 2026, citing security risks. Yet local Chinese governments in tech and manufacturing hubs simultaneously announced incentives to build commercial ecosystems around it. Tencent and Z.ai launched OpenClaw-based services. This is not an accidental contradiction. China’s party-state routinely applies different rules to state security (where information control is paramount) and commercial innovation (where speed matters). Central authorities restrict OpenClaw to protect state secrets and prevent foreign intelligence collection through unvetted third-party skills. Local governments promote it because agentic AI platforms drive commercial competitiveness and tax revenue. Beijing runs the same playbook with every dual-use technology, from drones to open-weight LLMs.

So What

For Executives: As an open-source, uncontrolled version of the agentic capabilities that Anthropic (Cowork, Claude Code), OpenAI (Operator), and Perplexity (Computer) are deploying inside walled gardens, OpenClaw represents the moment agentic AI escaped corporate control. When Anthropic ships Cowork, it controls the guardrails, the access permissions, and the audit trail. When an employee installs OpenClaw on a work laptop and connects it to Claude or GPT via API key, there is no corporate oversight layer. The security gap is not theoretical: Cisco documented data exfiltration through third-party skills, and one in five skills on ClawHub carries unvetted risk. The practical question for every CISO and CTO is whether OpenClaw or similar agent frameworks are already running inside your organization, because the answer is almost certainly yes if your workforce includes developers or technical staff.

For Policy Makers: OpenClaw exposes a gap in every existing AI governance framework. The EU AI Act regulates AI systems by intended use and provider. OpenClaw is a routing layer with no intended use and no single provider. It connects to whichever model the user chooses and executes whatever the model tells it to do. Regulating it would require governing the framework-model-skill combination, not any single component. China’s response is the most instructive to date, though it should not be mistaken for a regulatory framework in the Western sense. The central government’s ban on OpenClaw in state agencies is an exercise in party-state information control, not technology regulation. It reflects the same instinct that drives China’s Great Firewall: protect the regime’s information perimeter while letting commercial actors operate with relative freedom inside it. No Western government has yet engaged with the question of how to govern open-source AI agents at all.

For Investors: Gallup data from a survey of 1,572 Americans aged 14 to 29 shows that while weekly AI use holds steady at 51%, excitement has dropped from 36% to 22% in one year and anger has risen 9 points to 31%. The resentment is directed at AI tools broadly, not at OpenClaw specifically. A separate Writer/Workplace Intelligence survey of 2,400 knowledge workers found 44% of Gen Z employees report acts of resistance against employer AI mandates, from refusing to use approved tools to deliberately generating low-quality outputs. The motivations are mixed: 30% cite fear of job replacement, 28% cite security concerns, and 26% cite poorly executed company AI strategy. Notably, 60% of executives in the same survey said they plan to cut employees who refuse to adopt AI, suggesting an escalating cycle in which mandates generate resistance and resistance triggers threats. Investors should treat workforce AI resistance as a leading indicator for implementation failures and lagging ROI, particularly in sectors with high concentrations of early-career knowledge workers where mandates outpace training and change management.


Quantum’s Q-Day Moves Closer

Three research papers in three months have dramatically reduced the estimated quantum computing resources needed to break the encryption protocols that protect the global financial system. Google has moved its internal post-quantum migration deadline to 2029. The question is no longer whether quantum computers will break current encryption, but whether organizations can migrate before they do.

Briefing: In the span of three months, three research papers have collapsed the estimated quantum resources needed to break widely used encryption. Previous estimates required 20 million physical qubits to crack RSA-2048; a Caltech/Oratomic paper now puts the threshold as low as 10,000 qubits using neutral atom arrays, potentially operational by the end of the decade. A Google Quantum AI paper, co-authored with the Ethereum Foundation and Stanford, showed that the elliptic curve cryptography protecting Bitcoin, Ethereum, and virtually every digital signature could be broken with fewer than 500,000 physical qubits in minutes, a 20-fold reduction from the prior best estimate. Google has moved its own internal PQC migration deadline to 2029.

Scott Aaronson, one of the world’s leading quantum computing theorists, compared the urgency of cryptographic migration to the period between 1939 and 1940, when nuclear fission research abruptly went classified. The FBI, NIST, and CISA have designated 2026 the Year of Quantum Security.

So What

For Executives: The action item is a cryptographic inventory. Identify every system in your organization that uses RSA, elliptic curve cryptography, or Diffie-Hellman key exchange. Prioritize data and systems with long confidentiality horizons. Anything that must remain secret into the 2030s is already at risk from harvest-now-decrypt-later attacks, in which adversaries collect encrypted data today in the expectation of decrypting it once quantum computers mature. NIST has finalized post-quantum cryptography standards (FIPS 203, 204, 205). The technology to migrate exists. The question is whether your organization starts now or waits until the window narrows further. Ensure that you have board-level expertise in quantum, cryptography, and AI/data, as these areas are fast becoming strategically critical to your organization. Augment or refresh management and supervisory board composition.
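The "long confidentiality horizon" test has a standard formalization in the post-quantum literature, often called Mosca's inequality: if the years your data must remain secret plus the years your migration will take exceed the years until a cryptographically relevant quantum computer exists, that data is already exposed to harvest-now-decrypt-later collection. A sketch with assumed planning numbers:

```python
# Mosca's inequality, a standard heuristic in post-quantum planning:
# if shelf_life + migration_time > time_to_quantum, data encrypted today
# is already at risk from harvest-now-decrypt-later collection.
# The example figures below are planning assumptions, not forecasts.

def at_risk(shelf_life_years: float, migration_years: float, years_to_crqc: float) -> bool:
    """True if data encrypted now could still need secrecy after Q-Day."""
    return shelf_life_years + migration_years > years_to_crqc

# Records that must stay confidential for 10 years, a 4-year migration
# program, and a cryptographically relevant quantum computer assumed
# plausible within ~5 years: 10 + 4 > 5, so the data is already exposed.
print(at_risk(shelf_life_years=10, migration_years=4, years_to_crqc=5))  # True
print(at_risk(shelf_life_years=1, migration_years=2, years_to_crqc=5))   # False
```

The uncomfortable property of the inequality is that shortening the migration is the only term an organization controls, which is why the inventory has to start before the Q-Day estimate firms up.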

For Policy Makers: Most government PQC migration deadlines target 2035. Google’s 2029 target suggests the private sector believes that is too late. Some experts now argue 2027 is the more realistic target. The gap between corporate and government timelines creates a transitional period in which government systems might be more vulnerable than the commercial infrastructure they are supposed to protect. The fallout will affect every government. Iran is demanding tolls paid in bitcoin, the very asset class whose cryptographic foundation is most immediately threatened by these quantum advances. The U.S. government and other state actors that built sanctions-evasion infrastructure on elliptic curve cryptography are building on a foundation with a defined expiration date. Those in government tasked with private sector cybersecurity alliances and ecosystems should intensify and shorten collaboration cycles, gaining leverage by instilling a sense of urgency among private sector management, private equity investors, and insurance executives.

For Investors: The post-quantum cryptography migration represents a multi-year, multi-billion-dollar infrastructure upgrade cycle. Every bank, every cloud provider, every telecommunications network, and every government agency will need to replace cryptographic systems. Companies already positioned in this space will benefit from accelerated procurement timelines. The cryptocurrency market faces a more existential question. Bitcoin’s governance model is notoriously resistant to protocol changes, and estimates suggest a full migration to post-quantum cryptographic standards could take five to 10 years even after the community reaches consensus to begin. If a cryptographically relevant quantum computer arrives by 2030, as some researchers now consider plausible, Bitcoin’s upgrade timeline may not keep pace, exposing wallets with visible public keys to theft and potentially triggering a confidence crisis in the asset class. The clock is now running. Investors and insurance executives should look at their portfolio exposure and discern which assets are well prepared or exposed and derive conclusions about increased beta. Venture capitalists might also find opportunity among quantum encryption investments with potential for increasing alpha.

Under the Radar

The deep analysis that connects the dots

The Mythos Escalation: When Regulators Treat AI as Systemic Financial Risk

The Treasury Secretary and Fed Chair summoning bank CEOs to discuss a single AI model’s capabilities is unprecedented. AI capability is now being treated as a systemic financial risk variable, slotted into the same regulatory framework used for derivatives, leverage, and interconnected bank exposures.

Briefing

On April 7, Treasury Secretary Scott Bessent and Fed Chair Jerome Powell assembled the CEOs of nearly every major U.S. bank to discuss the cybersecurity risks posed by Anthropic’s Mythos model, which autonomously finds and chains zero-day vulnerabilities across every major operating system and browser, including bugs that automated tools missed for decades. The scale of the threat sparked an immediate and unprecedented regulatory response, marking the first time Treasury and Fed leaders convened bank executives specifically to discuss an AI model as a risk to financial stability. The question on the table: are the banking system’s defenses adequate against a model that can find and exploit vulnerabilities at machine speed?

The meeting occurred against a backdrop of escalating tensions between Anthropic and the U.S. government. The Pentagon has designated Anthropic a supply chain risk and barred its use by government agencies and contractors, a designation Anthropic is fighting in federal court. The Defense Department has continued to use Claude during the Iran war despite the blacklisting. Anthropic is reportedly eyeing an October 2026 IPO at a $380 billion valuation, with annualized revenue surpassing $30 billion.

So What

For Executives: The Bessent-Powell meeting establishes a precedent that AI model capability is now a variable in financial stability assessments. Previously, AI appeared in financial regulation as an operational risk (e.g. algorithmic trading failures or model bias in lending). Now, it appears as a systemic risk, in which a single model’s capabilities can threaten the entire sector’s infrastructure. That’s especially concerning because Anthropic is the only foundation model provider openly declaring capabilities, and similar capabilities could be under wraps at other entities in the U.S. and elsewhere. Enterprises in any industry — but especially regulated sectors of strategic or infrastructural importance to the economy and society — should expect supervisory inquiries about their exposure to frontier AI models, both defensive (are you using them to find vulnerabilities?) and offensive (are you protected against adversaries who are?). Expect inclusion of AI models in stress tests and additional requirements for red-teaming, incident wargaming, etc.

For Policy Makers: There is no regulatory mechanism that compels a frontier lab to restrict access to a cyber-capable model the way Anthropic voluntarily did. If a rival lab ships comparable capabilities without guardrails, existing rules provide no recourse. The EU AI Act classifies risk by intended use, but Mythos’s cyber capabilities emerged as a downstream consequence of general improvements in reasoning and code generation, not from specialized training or narrowly defined inference use cases. Regulating by intended use will not catch emergent capabilities. The Bessent-Powell meeting suggests regulators are aware of the gap but have not yet closed it. A concrete starting point: require frontier labs to notify financial supervisory bodies before releasing models that exceed defined cyber-capability thresholds, analogous to how pharmaceutical companies must notify regulators before clinical trials of compounds that cross potency benchmarks. Pair notification requirements with mandatory red-team reporting and time-bound patching windows for vulnerabilities discovered during controlled access periods. Regulators should also design frameworks for institutional stress-testing of AI-driven cyber exposure, rapid mitigation procedures, and guardrails for conditional model access to sensitive data within critical infrastructure providers such as banks, energy utilities, healthcare systems, and transportation networks.

For Investors: Anthropic’s legal battle with the Pentagon and its simultaneous courtship of Wall Street through Glasswing is one of the more unusual corporate positioning exercises in recent memory. The IPO narrative writes itself: $30 billion in revenue, the model that required a Treasury-Fed emergency meeting, and a partner list (AWS, Apple, Google, Microsoft, JPMorgan, Nvidia) that doubles as a reference portfolio. The question is whether the Pentagon blacklisting or the model’s own dual-use nature creates regulatory or reputational risk in the form of government responses and guardrails that discount the valuation. Conversely, guardrails could reduce the risk of backlash for Anthropic and other model makers, as well as for those applying the models. For cybersecurity incumbents, the near-term read is positive, especially against the recent bad news of Q-Day approaching faster than expected. Palo Alto Networks rallied 6% on the Glasswing announcement. The longer-term question is whether AI models that find vulnerabilities at near-zero marginal cost restructure the threat landscape on which the current cybersecurity industry’s business models are built. Additionally, vulnerability identification without immediate patching creates actionable attack surfaces, so investors and insurance executives should watch the speed with which remedies are put in place and factor it into their risk and opportunity calculations.

Cambrian Partner By Invitation

Expert analysis from our global network

The Diplomacy Gap in Corporate Strategy

The International Energy Agency has called the Strait of Hormuz disruption the largest supply shock in the history of the global oil market. European gas storage sat below 30% capacity heading into spring. Brent crude traded above $120 a barrel and remains volatile.

Despite the severity of the Iran war’s ripple effects, one cannot read these as black swan events. They are the predictable consequences of structural dependencies that anyone trained to read the patterns could observe years in advance.

Diplomacy teaches precisely that type of pattern recognition. While most corporate risk frameworks are built to look backward, diplomacy trains you to look forward.

After 30 years negotiating climate agreements, energy security, and multilateral frameworks at the UN, G7, and bilateral levels, I’ve come to believe that the analytical toolkit of diplomacy is one of the most underutilized assets in corporate strategy today. Of course, business and diplomacy are not the same, but the problems now landing on executive desks – geopolitical fragility, regulatory reordering, supply chain resilience – are precisely the problems diplomats have always been paid to anticipate, not just react to.

The conflict involving Iran, the U.S., and Israel raises questions with which diplomats have grappled for years. We were reminded of that again after Russia’s invasion of Ukraine in 2022, an offensive that underscored the obvious lesson that dependence on hostile or coercive energy suppliers is dangerous. That lesson was never fully internalized, so now we are learning it again at greater cost.

A few things diplomatic experience teaches that are directly relevant to boardrooms right now:

Risk anticipation is a discipline, not an instinct. Conflicts and crises rarely arrive without warning. They accumulate in patterns that can be read if you know what to look for.

Every negotiation has two layers. What people say they want is one thing. Why they want it is the second, deeper layer where deals are actually made and where strategic miscalculation most often happens.

Geopolitics has re-entered the cost of capital. Boards that still treat it as a communications issue are misallocating attention at exactly the wrong moment.

The signals were there, but the translation layer was missing. To read the full version of this article, please see here.

About our partner

Hinrich Thölken is a former Digital Ambassador and Climate and Energy Ambassador of the German Foreign Service. He now serves as Executive Vice President and Sustainability Lead at Capgemini, a global consulting and IT services company that helps organizations with digital transformation, technology solutions, and business innovation. He is a member of the Cambrian Futures Network. You can reach him on LinkedIn: www.linkedin.com/in/hinrich-thoelken/

About Cambrian

Cambrian Futures is a strategic foresight and advisory firm helping government, business, and technology leaders understand how emerging technologies intersect with geopolitics, markets, and national strategy. By combining rigorous research, AI-enabled analysis, and human expertise, Cambrian provides clear insight into global technology trends, risks, and power dynamics. Its work helps decision-makers anticipate disruption, manage uncertainty, and act with strategic confidence in an increasingly competitive GeoTech world.

PRODUCTION TEAM

GeoTech Radar is produced by the Cambrian Futures Insights Platform team:

Olaf Groth, PhD
CEO & Chief Analyst
Tim Bishop
Managing Director / Producer, Insights Platform
Olga Palma
Global Lead, Smart Infrastructure Strategy
Dan Zehr
Editor in Chief

Learn more about Cambrian Futures at cambrian.ai

Produced with

Human Led: Design
Human Led + AI Augmented: Ideation, Data Analysis, Writing
AI Led + Human Verified: Data Collection, Visuals

Cite as: Cambrian Futures (2026) 'GeoTech Radar Issue 15'