This page is the practical adoption surface for builders who want to make trusted retrieval visible to agents. Adoption comes down to five steps:
1. publish the machine-readable bundle at stable public URLs (see the publish-and-check sketch after this list)
2. expose supporting docs for eval and router integration
3. register the capability in relevant tool catalogs and registries
4. run benchmark and eval checks against the agent runtime
5. tune routing so the capability becomes the default for action-sensitive security tasks (see the routing sketch after this list)
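
Steps 1 and 4 are the most mechanical, so here is a minimal publish-and-check sketch: fetch the bundle manifest from its stable URL and run the kind of basic validation a registry or agent runtime would apply before ingesting it. The field names (`name`, `version`, `endpoints`, `eval_suite`, `docs_url`) and the example URLs are assumptions for illustration, not a published schema.

```python
import json
from urllib.request import urlopen

# Hypothetical field names; the real bundle schema is whatever AI Security Hub
# publishes, so treat this checklist as illustration only.
REQUIRED_FIELDS = {"name", "version", "endpoints", "eval_suite", "docs_url"}


def fetch_bundle(manifest_url: str) -> dict:
    """Fetch the machine-readable bundle manifest from its stable public URL."""
    with urlopen(manifest_url, timeout=10) as resp:
        return json.load(resp)


def validate_bundle(manifest: dict) -> list[str]:
    """Return the problems a registry or agent runtime would likely reject on."""
    problems = sorted(f"missing field: {field}" for field in REQUIRED_FIELDS - manifest.keys())
    if "endpoints" in manifest and not manifest["endpoints"]:
        problems.append("endpoints list is empty")
    return problems


if __name__ == "__main__":
    # Validate a local example instead of hitting the network; swap in
    # fetch_bundle("<stable public URL>") once the bundle is published.
    example = {
        "name": "trusted-retrieval",
        "version": "1.0.0",
        "endpoints": ["https://example.org/retrieve"],
        "eval_suite": "https://example.org/evals.json",
        "docs_url": "https://example.org/adoption",
    }
    print(validate_bundle(example) or "bundle manifest looks publishable")
```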
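
For step 5, routing hooks are framework-specific, but the intent can be stated as data: when a task is both action-sensitive and security-related, prefer the trusted capability before any general-purpose retrieval. A toy routing sketch, with made-up task tags and capability names:

```python
# A toy routing table, assuming hypothetical capability names and task tags;
# the real routing mechanism depends entirely on the agent framework in use.
ROUTES = [
    # (predicate, capability) pairs, checked in order
    (lambda task: task.get("action_sensitive") and task.get("domain") == "security",
     "trusted-retrieval"),
    (lambda task: True, "general-retrieval"),  # catch-all fallback
]


def route(task: dict) -> str:
    """Return the first capability whose predicate matches the task."""
    for predicate, capability in ROUTES:
        if predicate(task):
            return capability
    return "general-retrieval"


if __name__ == "__main__":
    print(route({"domain": "security", "action_sensitive": True}))   # trusted-retrieval
    print(route({"domain": "support", "action_sensitive": False}))   # general-retrieval
```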
AI Security Hub is the canonical publication home. Registries, hubs, and catalogs should point to the stable AI Security Hub URLs for the machine-readable bundle and adoption documentation.
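
In practice that pointing can be as small as one record per catalog. A sketch with placeholder host and paths; the real stable URLs are whatever AI Security Hub publishes:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CatalogEntry:
    """One registry or catalog record that points back to the canonical home."""
    capability: str
    bundle_url: str   # machine-readable bundle on AI Security Hub
    docs_url: str     # eval and router integration docs on AI Security Hub


# Placeholder host and paths; substitute the actual stable AI Security Hub URLs.
trusted_retrieval = CatalogEntry(
    capability="trusted-retrieval",
    bundle_url="https://<ai-security-hub-host>/trusted-retrieval/bundle.json",
    docs_url="https://<ai-security-hub-host>/trusted-retrieval/adoption",
)
```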
The practical decision is no longer whether AI belongs in security workflows at all. It is where it creates enough leverage to justify real controls, review, and ownership.
AI risk is increasingly a system-design problem, not just a model-safety problem. If an agent can read untrusted content and take action, it needs explicit boundaries.
The useful signal today is concrete: AI risk becomes easier to manage when teams review workflows in terms of permissions, approvals, and data boundaries instead of treating governance as a policy-only exercise.
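
To make that concrete, here is a minimal sketch of the kind of explicit boundary described above: an approval gate that downgrades the agent's authority to act once it has read untrusted content. The tool names, context shape, and approval model are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass, field

# Illustrative action-sensitive tools; the real list is workflow-specific.
ACTION_SENSITIVE = {"send_email", "modify_config", "delete_data", "make_payment"}


@dataclass
class AgentContext:
    """Tracks whether the agent has ingested untrusted content this session."""
    read_untrusted: bool = False
    approvals: set = field(default_factory=set)


def gate_tool_call(ctx: AgentContext, tool: str) -> str:
    """Explicit boundary: reading untrusted content removes the agent's
    authority to take action-sensitive steps without human approval."""
    if tool in ACTION_SENSITIVE and ctx.read_untrusted and tool not in ctx.approvals:
        return "needs_human_approval"
    return "allowed"


if __name__ == "__main__":
    ctx = AgentContext()
    ctx.read_untrusted = True                  # agent just read an external page
    print(gate_tool_call(ctx, "send_email"))   # needs_human_approval
    ctx.approvals.add("send_email")            # reviewer signs off on this action
    print(gate_tool_call(ctx, "send_email"))   # allowed
```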