GeoTech Radar - Fusion of Geopolitics & Technology
The Stack Is the Target. The Courtroom Is the Battlefield. The Middle Powers Are the Opportunity.

IN THIS ISSUE:

CEO'S PERSPECTIVE
On the Radar
Under the Radar
Cambrian Partner By Invitation

CEO's Perspective

Strategic outlook from Cambrian leadership

Olaf Groth

What happens when a technology becomes a load-bearing pillar for society, but the people who run it do not see or refuse to accept that fact? Regardless of industry or sector, too many leaders still treat AI as a competitive tool. Only a handful of the wisest have started to ask what AI as critical infrastructure actually demands of its stewards. They need more allies to ponder questions about accountability, resilience, and responsibility if – or when – these load-bearing technologies fail.

The Nuclear Signal

For two decades, the tech industry spoke about compute as if electrons were free. Growing power demands have made compute increasingly expensive, politically contested, and strategically consequential. When Vietnam signed with Rosatom to power its digital economy, it made a 40-year alignment decision, not just an electricity supply agreement. The reactor vendor now becomes the gatekeeper for decades of maintenance, fuel supply, and training relationships. That should sharpen the mind of every executive building data center capacity in Southeast Asia. As the nuclear sprint signals, investment memos still understate the long-term scale of the AI buildout. You do not build nuclear reactors to power incremental workloads.

Demographics as Destiny

A RAND study on China's demographic crisis deserves more than a passing read. As it suggests, Beijing's investment in AGI, robotics, automation, and compute-as-utility is a structural necessity, not just geopolitical ambition. A country losing millions of working-age adults per year has no viable alternative to replacing human labor with machine labor at scale. That commitment is demographically locked in despite changes in leadership, trade conditions, or diplomatic climate. Companies competing with Chinese firms in industrial automation, elder care technology, and workforce management need to evaluate AI competition through a rivalry lens, of course, but they also need to consider the demographic imperative they are racing against. Those are different races with different durations.

AI Power as Responsibility

Two jury verdicts in two days did what a decade of regulatory debate could not – establish that algorithmic design is a product liability question. Courts essentially ruled that companies cannot hide behind Section 230 for engineering decisions that deliberately hook children, and roughly 2,000 pending cases will test how far that logic extends. The questions of accountability could grow even sharper for national defense. Record Pentagon R&D budgets and the fastest VC exit environment in defense tech history are accelerating AI's integration into targeting, logistics, and command systems. The tools are advancing faster than the doctrine governing them. None of the world’s major militaries has yet identified who bears responsibility when an AI-enabled system fails, or succeeds in ways its designers did not anticipate. The trillions of dollars of capital pouring in will not answer that question either. Only an architecture of deliberate, human oversight will.

The Layer Nobody Audited

The TeamPCP attack on LiteLLM was a reminder that the AI stack has a substrate most executives have never examined. The libraries, wrappers, and orchestration tools that sit between your applications and every LLM provider you use have become a target. The Iran-specific wiper payload confirms that supply chain weaponization has become a geopolitical instrument, not merely a criminal one. Treat your software supply chain with the same scrutiny you apply to your physical one.

The leaders I most respect are not the ones who build fastest. I admire the ones who understand, before they build, what they are building into. That understanding has become a competitive advantage. Before long, it will be a compliance requirement. The distinction between the two is narrowing faster than most boards have noticed.

Olaf

On the Radar

The signals affecting the GeoTech landscape this week

Poisoning the AI Stack: An AI Supply Chain Worm Targets the Infrastructure That Builds and Secures AI

A hacking group called TeamPCP compromised some of the most widely used tools in the AI development ecosystem, including a library that sits between enterprises and every major large language model (LLM) provider. The attacks weaponize open-source trust and include a geographically targeted wiper aimed at Iranian systems.

BRIEFING: A cybercriminal group tracked as TeamPCP launched a series of cascading supply chain attacks that compromised critical developer infrastructure across the AI ecosystem. The campaign began with a breach of one of the most widely deployed open-source security tools in cloud environments, but the most consequential attack hit LiteLLM, a popular Python library that serves as a unified interface between enterprise applications and multiple LLM providers, including OpenAI, Anthropic, Google, and Azure. The initial compromise gave attackers a potential pathway to intercept and exfiltrate secrets across an entire AI stack. Subsequent incursions included a payload called Kamikaze, a data wiper that exclusively targets machines geolocated to Iran. On Kubernetes clusters in Iran, it deployed a DaemonSet that wiped every node. And on standard machines, it executed a full system deletion. Aikido Security researcher Charlie Eriksen said there is no confirmed damage to Iranian machines yet, but the wiper had clear potential for large-scale impact if it achieved active spread.

SO WHAT FOR LEADERS: By moving deeper into the upstream stack, this cyberattack marked a shift in supply chain risk previously associated mainly with oil and gas and government breaches. Previous attacks targeted end-user software or operating systems, but TeamPCP targeted the tools that enterprises use to build, secure, and deploy AI systems. LiteLLM’s position in the stack is particularly concerning. As the connective tissue between applications and LLM providers, a single compromise here can expose API keys and credentials across every model provider an organization uses. The immediate action is to audit your AI middleware layer. Identify every library, wrapper, and orchestration tool that sits between your applications and external AI services. Pin dependencies to verified versions. Rotate any API keys or cloud tokens that may have been exposed through CI/CD pipelines that installed compromised packages. The Iran-specific wiper payload signals that open-source package registries are now a vector for geopolitically targeted destruction, not just espionage or financial crime. Treat your software supply chain with the same scrutiny you apply to your physical supply chain. And most importantly, avoid the curse of speed. As Jason Clinton of Anthropic said at this week’s RSA Conference in San Francisco, take the highest-level view of our shared AI system of systems and partnerships with other actors to see the vulnerabilities, then model them with AI. Stay ahead not just on speed, but on perspective.
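A first pass at the middleware audit described above can be automated. The Python sketch below is illustrative only: LiteLLM is the library named in this story, but the other watchlist names, the helper functions, and the example pin versions are assumptions, and a real audit would parse your actual lockfile rather than a hard-coded dictionary.

```python
# Illustrative sketch: flag AI middleware packages that are installed but
# unpinned, or whose installed version drifts from the lockfile pin.
from importlib import metadata

# Packages that broker calls to external LLM providers. litellm comes from
# this story; the others are common examples, not a definitive list.
WATCHLIST = {"litellm", "openai", "anthropic", "langchain"}

def audit_pins(installed: dict, pinned: dict) -> list:
    """Return findings for watchlisted packages that are unpinned or drifted."""
    findings = []
    for name, version in sorted(installed.items()):
        if name not in WATCHLIST:
            continue  # only audit the middleware layer
        pin = pinned.get(name)
        if pin is None:
            findings.append(f"{name}=={version}: NOT PINNED")
        elif pin != version:
            findings.append(f"{name}=={version}: lockfile says {pin}")
    return findings

def installed_packages() -> dict:
    """Snapshot of the current environment as {package name: version}."""
    return {(d.metadata["Name"] or "").lower(): d.version
            for d in metadata.distributions()}

# Compare the live environment against pins (hypothetical versions shown).
print(audit_pins(installed_packages(), {"litellm": "1.40.0"}))
```

Pinning by version alone does not stop a tampered re-release; pairing this with hash-checked installs (for example, lockfiles generated with cryptographic hashes) closes that gap.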

Big Tech’s Big Tobacco Moment: Two Juries Break Section 230’s Shield

In back-to-back verdicts on March 24 and 25, juries in Los Angeles and New Mexico found Meta and YouTube liable for addictive platform design and failing to protect minors. The rulings bypass the tech industry’s longstanding legal shield by targeting product design, not user content. More than 2,000 similar cases are pending.

BRIEFING: A Los Angeles County Superior Court jury found Meta and YouTube negligent in the design and operation of their social media platforms on March 25, concluding that both companies deliberately built addictive products and that executives knew this and failed to protect young users. The jury awarded $6 million in combined compensatory and punitive damages. Internal Meta documents revealed the company aimed to bring users in as tweens despite requiring users to be at least 13 years old. One day earlier, a New Mexico jury ordered Meta to pay $375 million for violating consumer protection laws by failing to protect minors from predators on Instagram and Facebook. Both are bellwether cases from a consolidated docket of thousands of lawsuits. The legal strategy centered on product liability rather than content moderation, arguing that design features including algorithmic recommendations, beauty filters, and infinite scroll constituted a defective product. This framing bypasses Section 230, which shields platforms from liability for user content but not product design. Both companies plan to appeal.

SO WHAT FOR LEADERS: The product-design framing is the breakthrough. For two decades, Section 230 of the Communications Decency Act shielded platforms from accountability for what users posted. By contrast, these verdicts establish that how a platform is engineered can create liability independent of content. Any company whose products use algorithmic engagement, recommendation systems, personalization, or behavioral nudges now faces a new class of design liability risk that has growing momentum in other countries, too. In December, Australia banned children under 16 years old from social media. The EU Digital Services Act imposes algorithmic transparency requirements. And the UK Online Safety Act gives regulators power to compel design changes. Treat algorithmic design decisions as regulatory and legal risk decisions, not just product decisions. If your products serve minors or use engagement-maximizing features, map those features against emerging product liability standards in every jurisdiction where you operate. Project algorithmic impact out through a simulated stakeholder system to ascertain second- and third-order effects.

The Middle Power Playbook: How ASEAN, India, and Japan Are Building AI Sovereignty Without Picking Sides

ASEAN is finalizing the world’s first regional digital economy agreement, covering 680 million people. Indonesia is building its own LLM in Bahasa. India hosted the first AI governance summit outside the West. Japan’s new government is using industrial policy to accelerate domestic AI. Together, they represent a $2 trillion market, and they are not picking sides between Washington and Beijing.

BRIEFING: While the global AI narrative remains dominated by the U.S.-China rivalry, a coalition of middle powers continues to build a different model. At the ASEAN Digital Ministers’ Meeting in Hanoi in January 2026, the bloc adopted the ASEAN Digital Masterplan 2026-2030 and advanced negotiations on the Digital Economy Framework Agreement (DEFA), the world’s first comprehensive regional digital economy treaty. ASEAN expects to sign DEFA by the end of 2026. The agreement covers cross-border data flows, digital payments, cybersecurity, digital identity, and AI governance across a market of nearly 680 million people. The World Economic Forum and BCG estimate that DEFA could double the region’s digital economy to $2 trillion by 2030.

Individual ASEAN members are pursuing distinct strategies. Indonesia is building a large language model in Bahasa through a partnership between Nvidia and Indosat’s Sahabat-AI initiative. Malaysia is positioning itself as the region’s AI compute hub with investments from Microsoft, Google, and Nvidia. Thailand is offering guaranteed low-cost renewable electricity for data centers. The Philippines, as 2026 ASEAN chair, is pushing a binding AI regulatory framework for the bloc. A February 2026 Chatham House study on middle power AI strategies describes this general approach as choosing where to build independent capabilities and where to maintain partnerships, rather than replicating the full AI stack.

Beyond ASEAN, the pattern is accelerating. India hosted the AI Impact Summit in February 2026, the first such gathering outside the West. Japan’s Prime Minister Sanae Takaichi is using industrial policy to advance domestic AI under a technological sovereignty agenda. South Korea is leveraging its semiconductor base to deepen AI investment. None are aligning exclusively with Washington or Beijing. They are building modular capabilities that allow participation in both ecosystems while retaining control over data and governance.

SO WHAT FOR LEADERS: The binary frame of U.S. versus China AI competition is obscuring a market opportunity that is forming in the space between. The countries pursuing selective AI sovereignty represent more than 3 billion people and some of the fastest-growing digital economies on earth. For technology companies, this means your go-to-market strategy in these regions cannot be a watered-down version of your U.S. or China playbook. These governments are demanding local, culturally calibrated language models, domestic data processing, regulatory compliance with regional frameworks, and genuine technology transfer. For investors, the middle power AI thesis is structurally underpriced and represents an arbitrage opportunity, because capital allocation still follows the U.S.-China binary. Companies that build for ASEAN’s regulatory environment, India’s governance framework, and Japan’s industrial ecosystem will access markets that the frontier AI labs are not yet serving effectively.

The Defense Tech Gold Rush: Where the Next Trillion in Military Spending Is Headed

Global defense spending topped $2.7 trillion in 2024 and is projected to reach $3.6 trillion by 2030. An increasing share is flowing to electronics, startups, and R&D rather than traditional platforms. The Pentagon’s FY2026 R&D request jumped 27 percent to $179 billion. NATO’s new 5 percent GDP target could add hundreds of billions annually.

BRIEFING: Defense technology investment is accelerating across multiple capital channels. Total venture capital deal value in defense tech jumped to a record $49.1 billion in 2025, up from $27.2 billion the year before, according to PitchBook. That figure includes both pure-play defense startups and dual-use companies with defense applications. Separately, CB Insights, which uses a narrower classification, measured equity funding for defense tech startups at $17.9 billion, more than double the prior year. VC exits also hit a record at $54.4 billion. Private equity deal activity in defense electronics, which covers sensing, electronic warfare, resilient communications, and mission computing, jumped 93 percent year-over-year, according to PitchBook data.

Government budgets are accelerating the trend. The Pentagon’s FY2026 budget proposes $179 billion for Research, Development, Test, and Evaluation, a 27 percent year-over-year increase and one of the largest allocations in history. Total U.S. defense expenditures are expected to reach $1 trillion as early as fiscal year 2026, years ahead of earlier projections, after the One Big Beautiful Bill Act added $150 billion to the defense budget. NATO members committed to raising annual spending targets from 2 percent to 5 percent of GDP by 2035, a structural rearmament that could add hundreds of billions annually to combined military budgets. The European Commission reported that venture capital investments exceeded 5 billion euros in deep tech defense and security startups in 2024 alone. The capital is flowing to specific capability gaps, with drones, space systems, and defense electronics accounting for much of the scale-up funding. More and more, prime contractors are partnering with startups through minority stakes and targeted acquisitions rather than internal R&D.

SO WHAT FOR LEADERS: Defense modernization is no longer about bigger platforms, but about who controls the electronics, software, and AI layers inside them. For technology companies, this creates both opportunity and regulatory complexity. The 2026 Comprehensive Outbound Investment National Security (COINS) Act requires U.S. residents and companies to notify the Treasury Department about foreign investments in technology sectors of concern, including AI, semiconductors, and quantum computing. Dual-use technology companies must navigate an expanding web of export controls, regulatory reviews, and outbound investment restrictions. For investors and boards, the defense tech thesis has shifted to a multi-year cycle of record government budgets, proven battlefield demand from Ukraine and the Middle East, and accelerating prime contractor acquisitions. Broaden your deal pipeline to allied countries and non-traditional defense spaces. At the same time, map your portfolio’s exposure to defense-adjacent technologies and understand where regulatory constraints on foreign investment could limit future flexibility.

The Nuclear Pivot: Southeast Asia Races to Build Reactors for AI Data Centers as the Iran War Rewrites Energy Security

Five ASEAN nations are reviving mothballed nuclear programs to power the region’s surging data center buildout, seeking to reduce energy dependence as the Iran war disrupts oil and gas supplies. Who builds the reactors – Russia, China, South Korea, or the U.S. – will shape digital sovereignty in the region for decades.

BRIEFING: Southeast Asia is pursuing nuclear energy with unprecedented urgency, driven by the explosive growth of AI-focused data centers and the Iran war’s disruption of global energy supplies. Five of the 11 ASEAN member states are actively pursuing nuclear programs, and nearly half the region could have operational reactors by the 2030s. Southeast Asia will account for a quarter of global energy demand growth by 2035, according to the IEA, and is projected to deliver nearly a quarter of the 157 gigawatts expected from newcomer nuclear nations by midcentury. The technology mix is evolving: Indonesia plans two small modular reactors (SMRs) by 2034, a newer reactor design that is smaller, faster to build, and potentially safer than conventional plants. China’s Linglong One SMR on Hainan Island is scheduled for operation in 2026, giving Beijing a first-mover advantage in exporting the technology. Vietnam, by contrast, is building conventional large-scale reactors with Russia’s Rosatom. Data center demand is the primary driver, with more than 2,000 facilities already operating across the region and many more in development. But the Iran war has intensified urgency by exposing the region’s dependence on imported fossil fuels through the Strait of Hormuz and the Red Sea.

SO WHAT FOR LEADERS: The strategic question is not whether Southeast Asia goes nuclear, but who builds the reactors. The vendor choice will create long-term dependencies that extend well beyond energy. Russia’s Rosatom is already building Vietnam’s plants and has supplied Bangladesh’s reactor. South Korea and the United States are competing for contracts but face longer timelines and higher costs. The nuclear vendor becomes a crucial gatekeeper, because reactor technology creates decades-long maintenance, fuel supply, and training relationships that give the supplier country significant influence over the host nation’s digital infrastructure. For companies building or leasing data center capacity in Southeast Asia, the energy source is now a financial, economic, and geopolitical variable. Understand which nuclear vendor your host country is partnering with and what that implies for long-term energy reliability, pricing, and political risk. For energy and infrastructure investors, Southeast Asia’s nuclear buildout represents a multi-decade capital deployment cycle. The region’s 2,000-plus existing data centers need reliable baseload power that solar and wind alone cannot provide at the required scale and consistency.

Under the Radar

The deep analysis that connects the dots

The Demographic Accelerant: Why China’s Population Crisis Is the Key to Reading Every AI and Robotics Play Beijing Makes

A new RAND Corporation study published in March 2026 documents what happens when the world’s second-largest economy and largest military enters demographic decline before reaching high-income status. According to the report, China’s military security faces a long-term recruitment challenge because of its aging population. Though the sheer volume of younger population cohorts eases near-term urgency, the People’s Liberation Army could eventually have trouble attracting the technically skilled recruits it needs for a technology-intensive modernization program when those same individuals could pursue more lucrative careers in the civilian economy. Furthermore, government finances will come under increased pressure, pension and healthcare costs will rise, and the economy faces generally negative effects from a shrinking workforce. Regime security depends on whether the Chinese Communist Party can simultaneously manage the needs of its aging population and the aspirations of its younger citizens without losing credibility.

All these challenges are underpinned by China’s contracting population, the RAND study said. The country could lose 250 million people by 2050, a sharp contraction from its 1.4 billion people today. Nearly one-third of the remaining population would be 65 or older. The United States, Japan, and most European countries aged after reaching high-income status, giving them fiscal resources to absorb the costs. China, by contrast, got old before it got rich, with GDP per capita of approximately $14,730, below the World Bank’s high-income threshold. The labor picture is counterintuitive: youth unemployment is high even as the overall workforce shrinks, because the economy is producing too few of the high-skilled technical jobs that young graduates want while simultaneously losing the manufacturing and service workers it needs. Growing numbers of young workers are opting out of China’s competitive corporate culture entirely. The military must compete with pension and healthcare systems for funding in what RAND describes as a potential zero-sum budgetary environment.

The GeoTech Signal. This demographic clock is the primary driver behind every other China story the GeoTech Radar covers. Previous Radars analyzed China’s 15th Five-Year Plan and its unprecedented inclusion of artificial general intelligence as a research target, computing power as a public utility, a revised immigration system as a talent pipeline, and humanoid robots as a dedicated line item. Read in isolation, these look like parts of an ambitious industrial policy. Read against RAND’s demographic projections, however, they look more like countermeasures to a very real geopolitical threat. A country losing millions of working-age adults per year has little choice but to replace human labor with machine labor at scale. The compute-as-utility framework ensures Chinese companies and researchers will have state-subsidized access to the AI infrastructure required to automate faster than the workforce shrinks.

For Leaders, this reframes China’s competitive posture. Western companies often evaluate Chinese AI ambitions through a lens of geopolitical rivalry or market share competition. That isn’t wrong, but it is too narrow. The demographic data suggests China’s AI investments are driven by internal economic and security necessity as much as external competition. This means Beijing’s commitment to automation, robotics, and AI will not weaken with changes in leadership, trade deals, or diplomatic climate. It is demographically locked in. Companies competing with Chinese firms in automation, robotics, industrial AI, elder care technology, and workforce management should plan for a competitor that treats these sectors as survival priorities. And the talent immigration system in the Five-Year Plan signals that China will increasingly recruit globally for the human capital it can no longer produce domestically, intensifying competition for AI researchers and engineers across every market. For investors, this could mean predictable demand shifts toward sectors such as longevity and health tech, home and logistics solutions, and robotics for manufacturing productivity and retail. Supported by demographic survival imperatives and policies, these sectors could potentially yield lower beta and higher alpha for portfolios, but that will also depend on more transparency in China’s financial markets and less political motivation in its regulatory enforcement. 

Cambrian Partner By Invitation

Expert analysis from our global network

Distributed AI Adaptation as a Geopolitical Risk Variable

On March 26, a federal judge in San Francisco temporarily blocked the U.S. Department of War from designating Anthropic as a supply chain risk. Within hours of the judge rejecting the government’s attempt to frame the company’s AI-governance stance as detrimental to national security, reports emerged that Anthropic is testing a new generation of AI systems with significantly expanded capabilities. Absent effective governance, the inherent risks of these new, more potent systems would expand as well, especially if they are based on transformer technology. Here is why:

Recent debates on AI governance have focused on export controls, model access, and compute concentration. Yet a deeper structural issue remains underexamined. Modern AI systems do not operate through transparent, step-by-step reasoning. They optimize outcomes across vast data landscapes, identifying patterns beyond human intuition. As a result, their internal decision pathways remain difficult to interpret. This opacity is not a flaw; it is a structural consequence of probabilistic optimization.

AI systems do not need self-awareness to adapt. When designed to optimize performance under constraints, they automatically adjust as those constraints change. These adjustments may unfold gradually across time, infrastructure, and software updates – without any single visible decision point.

For business and policy leaders, this shifts AI from a technology debate to a structural governance issue as risk may accumulate without clear warning signals. By the time unintended behavior becomes visible, it may be embedded in core systems. Oversight applied after deployment cannot be the primary safeguard – architecture determines exposure.

The real governance question is architectural. How are constraints encoded? Where are decision boundaries set? What mechanisms limit adaptation before it scales across infrastructure? Recent developments suggest that such mechanisms may not yet be in place.

Because these dynamics are structural, no state, institution, or stratum of society is inherently insulated from their effects. In distributed AI environments, strategic advantage and systemic vulnerability can emerge from the same design choices.

The risk is not a black swan. It is a design feature, a probability distribution – already unfolding.

About our partner

Askar Sinchev is a technology and digital transformation professional working at the intersection of AI, big data and institutional-regulatory innovation. He serves as a consultant to the Executive Office of the President of the Republic of Kazakhstan, and contributes to international AI governance dialogue as an IVLP alumnus (U.S. Department of State) and a member of the Ethics and Regulation Board of the Global Alliance on AI for Industry (UNIDO).

Chinchalinova Aigul is an inspector at the Executive Office of the President of the Republic of Kazakhstan. Her work focuses on security in a broad institutional context, including risk governance, and the oversight of emerging technologies.

About Cambrian

Cambrian Futures is a strategic foresight and advisory firm helping government, business, and technology leaders understand how emerging technologies intersect with geopolitics, markets, and national strategy. By combining rigorous research, AI-enabled analysis, and human expertise, Cambrian provides clear insight into global technology trends, risks, and power dynamics. Its work helps decision-makers anticipate disruption, manage uncertainty, and act with strategic confidence in an increasingly competitive GeoTech world.

PRODUCTION TEAM

GeoTech Radar is produced by the Cambrian Futures Insights Platform team:

Olaf Groth, PhD
CEO & Chief Analyst
Tim Bishop
Managing Director / Producer, Insights Platform
Olga Palma
Global Lead, Smart Infrastructure Strategy
Hooriya Faisal
Research & Marketing Associate
Dan Zehr
Editor in Chief

Learn more about Cambrian Futures at cambrian.ai

Produced with

Human AI Research

Cite as: Cambrian Futures (2026) 'GeoTech Radar Issue 13'