Why this story matters (even to payments)
In an article last week, the Financial Times reported that more than 800 public figures, from AI pioneers Geoffrey Hinton and Yoshua Bengio to Stephen Fry, Mary Robinson and Prince Harry, have signed a statement urging a prohibition on developing “superintelligence”, defined as systems “more intelligent than most humans.” The Future of Life Institute (FLI), which coordinated the effort, stresses this is not a pause on all AI, but a call to stop one specific class of systems until there is broad scientific consensus on safety and control, and strong public buy-in.[1][2]
We do not take a position on that prohibition. But we do think the letter raises a universal engineering question that does apply to finance: how much control and accountability should be designed into powerful systems before they are scaled? In other words, control is not a press release, it is an architecture.
Control begins as an architectural choice
Three themes in the superintelligence debate map directly to financial infrastructure:
- Explicit consent and verifiability
The FLI statement’s emphasis on public buy-in mirrors what good financial rails already require: consent you can prove, and revocation that actually works. In open banking, requests are permissioned, auditable and bounded by standardised flows, with authentication handled by the user’s bank using familiar factors (e.g., device possession plus biometrics).[3][4] Consent is not a banner, it is a protocol (see the sketch after this list).
- Accountability you can allocate
The letter’s central anxiety is “loss of control.” Financial systems face a humbler version of that: who is liable when something fails? The UK’s open banking work has repeatedly tied access to clear roles (AISP, PISP, ASPSP) and liability frameworks, precisely so that accountability is not diffuse when data or payments move across firms.[5] If responsibility cannot be assigned, it has not really been designed.
- Governance that scales with risk
The EU’s AI Act (rolling in through staged application) is a reminder that governance can be risk-based and phased, tightening requirements as capabilities and impact rise.[6][7] Financial regulation already does this: higher scrutiny and controls when systemic impact grows. Governance is most credible when it scales predictably with risk, not reactively with headlines.
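To make “consent is a protocol” concrete, the sketch below shows what that structure can look like. It is a minimal illustration under our own assumptions: the names (ConsentRecord, permits and so on) are hypothetical, not the Open Banking API. The point is that scope, expiry, status, revocation and an event trail live in one verifiable record, and access fails closed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class ConsentStatus(Enum):
    AUTHORISED = "authorised"
    REVOKED = "revoked"
    EXPIRED = "expired"


@dataclass
class ConsentRecord:
    """A consent that is scoped, time-bounded, revocable and auditable.
    Illustrative only: field names are our assumptions, not a standard."""
    consent_id: str
    psu_id: str                     # the end user who granted the consent
    tpp_id: str                     # the third party it was granted to
    scopes: frozenset[str]          # e.g. {"accounts:read", "payments:initiate"}
    expires_at: datetime
    status: ConsentStatus = ConsentStatus.AUTHORISED
    events: list[tuple[datetime, str]] = field(default_factory=list)

    def _log(self, action: str) -> None:
        self.events.append((datetime.now(timezone.utc), action))

    def permits(self, scope: str) -> bool:
        """Fail closed: deny unless explicitly granted, unexpired and authorised."""
        if self.status is not ConsentStatus.AUTHORISED:
            return False
        if datetime.now(timezone.utc) >= self.expires_at:
            self.status = ConsentStatus.EXPIRED
            self._log("expired")
            return False
        return scope in self.scopes

    def revoke(self) -> None:
        """Revocation that actually works: one call, logged, immediately effective."""
        self.status = ConsentStatus.REVOKED
        self._log("revoked_by_user")


consent = ConsentRecord(
    consent_id="c-123",
    psu_id="user-42",
    tpp_id="tpp-7",
    scopes=frozenset({"accounts:read"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
assert consent.permits("accounts:read")
assert not consent.permits("payments:initiate")  # never granted: denied by default
consent.revoke()
assert not consent.permits("accounts:read")      # revocation is immediate
```

The design choice worth noting is that permits has no allow-by-default branch: anything not explicitly in scope, or granted but since expired or revoked, is refused.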
Lessons for people building critical systems
The superintelligence debate is emotive, but its most practical lesson is quiet: design in control before capability makes it non-negotiable. For engineers and product leaders in finance, that translates to:
- Put the trust anchor where users already authenticate
In open banking payments, the user approves at their bank. This reduces attack surface and aligns security with established rituals, rather than inventing new ones the user must learn under stress.[3:1][4:1]
- Constrain power with defaults, not exceptions
Strong defaults (least privilege, explicit consent scopes, revocation paths) scale better than complex exception stacks. Where exceptions do exist (e.g., certain VRP use-cases), they should follow clearly published parameters that are monitored and revisable.
- Make audit a first-class feature
If you cannot reconstruct “who did what, when, under which consent,” you do not have a trustworthy system. Logging, non-repudiation and evidencing SCA are not compliance overhead; they are features customers rely on when trust is tested (a sketch of such an audit trail follows this list).
- Assume governance will tighten
Whether in AI or finance, controls tend to converge on risk. Building observability, rate-limiting, key rotation, kill-switches and segregation of duties early lowers the cost of future regulatory change.
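To give “who did what, when, under which consent” a concrete shape, here is a minimal sketch of a hash-chained, append-only audit log. It is an illustration under our own assumptions (class and field names are hypothetical, not a specific product or standard): each entry commits to its predecessor, so altering or dropping history breaks verification.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Append-only, hash-chained log. Each entry carries the hash of the
    previous one, so the whole history can be re-verified end to end."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._head = "genesis"

    def record(self, actor: str, action: str, consent_id: str) -> dict:
        entry = {
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "consent_id": consent_id,
            "prev": self._head,
        }
        # Canonical serialisation so the hash is reproducible at verification.
        self._head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._head
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; True only if nothing was altered or dropped."""
        prev = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


log = AuditLog()
log.record("tpp-7", "accounts:read", "c-123")
log.record("user-42", "consent.revoke", "c-123")
assert log.verify()
log._entries[0]["action"] = "payments:initiate"  # simulate tampering
assert not log.verify()
```

A hash chain alone evidences integrity, not authorship; for non-repudiation, each entry would additionally be signed with a key attributable to the actor.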
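And as one small example of building controls before governance demands them, here is a sketch combining token-bucket rate-limiting with a kill-switch. Again the names are ours and hypothetical; the point is that refusal paths exist from day one rather than being retrofitted.

```python
import threading
import time


class GuardedGateway:
    """Token-bucket rate limiting plus a global kill-switch, both checked
    before any request passes. Illustrative sketch, not a product API."""

    def __init__(self, rate_per_sec: float, burst: int) -> None:
        self._rate = rate_per_sec
        self._capacity = float(burst)
        self._tokens = float(burst)
        self._last = time.monotonic()
        self._killed = False
        self._lock = threading.Lock()

    def kill(self) -> None:
        """Flip the kill-switch: every subsequent request is refused."""
        with self._lock:
            self._killed = True

    def allow(self) -> bool:
        with self._lock:
            if self._killed:
                return False
            now = time.monotonic()
            # Refill tokens for elapsed time, capped at burst capacity.
            self._tokens = min(
                self._capacity, self._tokens + (now - self._last) * self._rate
            )
            self._last = now
            if self._tokens >= 1.0:
                self._tokens -= 1.0
                return True
            return False


gw = GuardedGateway(rate_per_sec=5.0, burst=10)
assert all(gw.allow() for _ in range(10))  # burst within capacity
assert not gw.allow()                      # bucket drained
gw.kill()
assert not gw.allow()                      # kill-switch overrides everything
```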
A neutral stance, a firm conviction
Asima does not campaign on AI policy. Our work is payment and data infrastructure. But the ethic of “control by design” is shared. The public letter highlights a fear of scaling capability faster than society can verify it is controllable. Financial infrastructure has learned (sometimes the hard way) that verification, consent, authentication and liability must be settled in the design, not after deployment.
That does not mean innovation slows to a crawl. It means architecture does more of the safety work, so new capability does not rely on improvised guardrails. In that spirit, we will continue to build systems that make consent explicit, authentication inherent, and accountability legible.
- Financial Times, “Steve Bannon and Meghan Markle among 800 public figures calling for AI ‘superintelligence’ ban,” 23 Oct 2025. (Reporter: Cristina Criddle). Link
- Future of Life Institute, “Statement on Superintelligence” (live statement text and rationale). Link
- Open Banking UK, Customer Experience Guidelines: Payment Initiation Services (v3.1.x) — PSU consent, redirection/decoupled authentication with the ASPSP. Link
- Open Banking Limited, Customer Experience Guidelines v3.1.3 (PDF) — secure journeys, authentication at the ASPSP and evidence of consent. Link
- UK Finance, The future development of Open Banking Payments — roles, onboarding and liability considerations for open banking payments. Link
- European Parliament Research Service, EU AI Act implementation timeline — general application from 2 Aug 2026 with phased obligations; full effect by 2027. Link
- AI Act Timeline (independent tracker) — summary of staged obligations and key dates. Link