09 APRIL 2026 · NEWS

Anthropic Managed Agents: universal safety, zero organisational policy

Anthropic shipped Managed Agents this month: autonomous Claude agents that run bash, write files, and call APIs, all hosted in Anthropic's cloud. Brilliant for developers. Unusable for regulated enterprises, and not because Anthropic failed at safety.

What Anthropic did well

Their safety layer is excellent. Managed Agents blocks universally catastrophic actions out of the box.

That is exactly what a model provider should block. It is the kind of safety that applies to every user, every organisation, every jurisdiction. It does not require any knowledge of your business.

What Anthropic cannot do

Anthropic cannot know your organisation's rules.

They will not block actions that are dangerous only by your organisation's definition.

They cannot encode every enterprise policy into their guardrails. They should not try. The set of organisational rules is infinite, domain-specific, and changes per team, per project, per environment.
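To make this concrete, here is a minimal sketch of what "organisation-specific rules" look like in practice. Every name, tool, and rule below is hypothetical; the point is that these rules are mechanical string checks against knowledge only your organisation has, not anything a model provider could ship.

```python
# Illustrative organisation-specific rules. The rule shape, tool names,
# and hostnames are all hypothetical, not part of any real product.
RULES = [
    {"tool": "bash", "deny_if_arg_contains": "prod-db",
     "reason": "no direct prod access"},
    {"tool": "http_request", "deny_if_arg_contains": "internal.example.com",
     "reason": "internal hosts need approval"},
]

def check(tool_name, arg):
    """Return (allowed, reason). Purely mechanical matching, no model."""
    for rule in RULES:
        if rule["tool"] == tool_name and rule["deny_if_arg_contains"] in arg:
            return False, rule["reason"]
    return True, None
```

A different team would ship a different `RULES` list, which is exactly why the set cannot live inside the provider's guardrails.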

The gap

Tool execution in Managed Agents happens inside Anthropic's cloud container. Clients receive agent.tool_use events via an SSE stream after the action has started. There is no pre-execution hook. There is no way for an enterprise customer to say "stop this tool call before it runs because my policy says so".
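The shape of that stream is worth seeing. The sketch below parses SSE framing into events; the `agent.tool_use` event name and its payload fields are assumptions for illustration, not Anthropic's documented schema. The key point is structural: by the time your handler sees the event, the action is already underway.

```python
import json

def parse_sse(raw):
    """Parse a raw SSE body into (event, data) pairs.
    Event names and payload fields here are illustrative assumptions."""
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = json.loads(line[len("data:"):].strip())
        events.append((event, data))
    return events

raw = (
    "event: agent.tool_use\n"
    'data: {"tool": "bash", "input": "rm -rf /tmp/scratch"}\n'
)
for event, data in parse_sse(raw):
    # By the time this runs, the tool call has already started inside
    # the provider's container: we can observe and react, not prevent.
    print(event, data["tool"])
```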

For a startup shipping an AI product to end users, this is fine. Anthropic's safety layer is enough.

For a bank running agents against customer data, a hospital running agents on patient records, an insurer running agents on underwriting workflows, or any regulated business, it is a non-starter.

What HookBus does

HookBus is the organisational policy layer. Your rules, your knowledge base, your compliance requirements, your approval chains, your audit trail. Every enterprise needs a different set, and the only place to enforce them is external to the model.

For Managed Agents, HookBus ships a publisher adapter that intercepts the SSE event stream, routes tool calls through the bus, and lets subscribers react. When CRE denies a tool call, HookBus cancels the session mid-stream. Every action is logged with full metadata for audit.
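The flow above can be sketched as a tiny in-memory pub/sub loop. This is a stand-in for HookBus's real API: the bus class, the CRE subscriber's rule, and the cancel callback are all illustrative; only the shape of the flow (publish, subscribers decide, deny cancels, everything logged) comes from the description above.

```python
# Minimal sketch of the publisher-adapter flow. All names are stand-ins.
class Bus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def publish(self, event):
        """Fan the event out; any subscriber returning False denies it."""
        return all(fn(event) for fn in self.subscribers)

def cre_subscriber(event):
    # Hypothetical organisational rule: no bash against production hosts.
    return not (event["tool"] == "bash" and "prod" in event["input"])

bus = Bus()
bus.subscribe(cre_subscriber)

audit_log = []

def handle_tool_use(event, cancel_session):
    audit_log.append(event)      # every action logged for audit
    if not bus.publish(event):
        cancel_session()         # a deny cancels the session mid-stream
```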

For on-premise agents such as Claude Code, OpenAI Agents SDK, LangChain, or CrewAI, HookBus does full pre-execution gating. Same bus, same subscribers, same rules.
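Pre-execution gating is the crucial difference: on-premise, the check runs before the tool does. A minimal sketch, with a hypothetical `bash` tool and `no_prod` rule standing in for real tools and real policy:

```python
class PolicyDenied(Exception):
    pass

def gate(tool_fn, check):
    """Wrap a tool so the policy check runs *before* execution."""
    def gated(arg):
        allowed, reason = check(tool_fn.__name__, arg)
        if not allowed:
            raise PolicyDenied(reason)   # tool never runs
        return tool_fn(arg)
    return gated

def bash(command):                # stand-in for a real agent tool
    return f"ran: {command}"

def no_prod(tool_name, arg):      # hypothetical organisational rule
    if "prod" in arg:
        return False, "production access requires approval"
    return True, None

safe_bash = gate(bash, no_prod)
```

With a hosted agent you can only cancel after the fact; with this wrapper the denied call never executes at all.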

Universal safety is generic. Organisational policy is specific to you. Anthropic built the former. HookBus fills the latter. They are different layers. Both are needed.

The mental model

Think of it like airline security. The TSA blocks knives, guns, explosives. That is universal safety, applied to everyone. It does not know that your company requires employees to show ID at gate 7 before boarding the private flight to the factory. That is your organisation's policy. It runs on top of the universal layer, not as a replacement.

HookBus is your organisation's security desk. Anthropic is the TSA.

What this means for your AI strategy

If you are evaluating Managed Agents for an enterprise use case:

  1. Anthropic handles the "model did something catastrophically unsafe" problem
  2. You still need to handle the "agent did something that violates our policy, breaks a compliance rule, or touches something it shouldn't" problem
  3. That second problem is not going away. It is going to get harder as models get more capable and more autonomous
  4. You need an external, mechanical enforcement layer that can encode your specific rules

That is what HookBus is for.
