Tim's Tech Thoughts

Amazon Bedrock's Security Model: What Enterprises Actually Need to Know

One of the most common blockers I hear from enterprise teams evaluating generative AI isn’t about model quality — it’s about trust. Where does our data go? Can AWS see our prompts? Will our inputs be used to train someone else’s model? Can we meet our compliance requirements? These are the right questions to ask, and AWS has put considerable engineering effort into making sure Bedrock’s answers hold up.

This post walks through the core pillars of Bedrock’s security model: how inference stays private, what AWS commits to around your data, how to keep traffic off the public internet, what compliance certifications are in place, how IAM gives you fine-grained access control, and how AgentCore’s Cedar policies extend that control to the level of individual agent tool calls.

Your Inference Is Private by Design

The foundation of Bedrock’s inference privacy is an architecture called Mantle — AWS’s inference engine built around a “zero operator access” (ZOA) guarantee. The design goal is unambiguous: no AWS employee, no model provider, and no other customer should be able to access your prompts or completions — ever.

ZOA isn’t just a policy commitment. It’s architectural. Mantle infrastructure has no SSH access, no AWS Systems Manager Session Manager, and no serial console access. There is no emergency break-glass path for human operators. Operational tasks are handled by automation and signed APIs, not people. Software updates to Mantle must be cryptographically signed and verified via NitroTPM attestation before they can be deployed — ensuring the execution environment remains immutable and auditable.

In transit, all prompts travel to Mantle encrypted via TLS (minimum TLS 1.2, with TLS 1.3 recommended). FIPS 140-3 validated cryptographic modules are available through dedicated FIPS endpoints for organizations that require them.
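
You can mirror that floor on the client side as well. A minimal sketch using Python's standard ssl module — this pins the minimum protocol version locally for any client you build on it; it is not a Bedrock-specific API:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# matching Bedrock's minimum supported protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate verification remains enabled by default.
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The AWS SDKs already negotiate TLS on every call; this kind of explicit floor is mainly useful for custom HTTP clients or for satisfying an internal hardening checklist.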

Model providers are equally isolated. Each foundation model on Bedrock runs in a Model Deployment Account owned and operated exclusively by the Bedrock service team. Model providers have no access to these accounts and therefore no visibility into your logs, prompts, or completions — by design, not just policy.

Your Data Will Not Train Anyone’s Models

AWS’s commitment here is explicit and contractual:

“AWS and the third-party model providers will not use any inputs to or outputs from Amazon Bedrock to train Amazon Nova, Amazon Titan, or any third-party models.”

This applies to prompts, completions, embeddings, and fine-tuning data. Your inputs are never shared with model providers. When you fine-tune a foundation model using your own data, that data is used exclusively to create a private, customized copy of the model within your account — it does not feed back into the base model that other customers use.

Any exception to this would require your explicit opt-in consent. The default is always: your data stays yours.

Data is also encrypted and stored at rest in the AWS Region where you’re using Bedrock. It doesn’t move across regions unless you deliberately opt into cross-region inference — and even then, it travels encrypted over AWS’s private global network backbone, not the public internet.

PrivateLink: Keeping Traffic Off the Public Internet

By default, Bedrock API calls travel over the public internet. For most teams, that’s fine. For teams handling sensitive data — healthcare records, financial data, legal content — you want a guarantee that traffic never leaves the AWS network.

With AWS PrivateLink, you create interface VPC endpoints for Bedrock in your subnets, provisioned as elastic network interfaces with private IP addresses. Traffic to Bedrock routes entirely over AWS’s internal network — no internet gateway, no NAT device, no VPN required. Bedrock exposes five distinct endpoints:

  • com.amazonaws.{region}.bedrock — control plane API
  • com.amazonaws.{region}.bedrock-runtime — inference (InvokeModel, Converse)
  • com.amazonaws.{region}.bedrock-agent — Agents build-time API
  • com.amazonaws.{region}.bedrock-agent-runtime — Agents runtime API
  • com.amazonaws.{region}.bedrock-mantle — Mantle inference API

When you enable private DNS on the endpoint, AWS creates a private hosted zone that resolves the standard Bedrock regional hostnames to your private endpoint IP addresses. Your application code needs zero changes — the same SDK calls, the same endpoint URLs, now routed privately. Attach security groups to the endpoint ENIs for network-layer controls, and layer on an endpoint policy to restrict which IAM principals and actions are permitted through that specific endpoint.
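
As a sketch of that last layer, here is what a restrictive endpoint policy might look like, built as a Python dict so it can be validated before deployment. The account ID and role name are placeholders, not values from this post:

```python
import json

# Hypothetical VPC endpoint policy: only one application role may invoke
# models through this endpoint; every other principal and action is
# implicitly denied at the endpoint.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/app-inference-role"
            },
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```

The endpoint policy is evaluated in addition to the principal's identity-based policies, so both layers must allow a request for it to succeed.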

Compliance: Meeting the Standards That Matter

Bedrock holds a comprehensive set of third-party certifications, which matters because your compliance posture is only as strong as the services you build on.

Certifications in place:

  • SOC 1, 2, 3 (Type 2) — in scope since August 2023
  • ISO 27001, 27017, 27018, 27701 — covering information security, cloud controls, PII in the cloud, and privacy management
  • HIPAA Eligible — requires a Business Associate Agreement (BAA) with AWS
  • FedRAMP Moderate — authorized for US East and West regions
  • FedRAMP High and DoD IL-4/5 — authorized in AWS GovCloud (US-West), including third-party models from Anthropic and Meta, plus Bedrock Agents, Guardrails, Knowledge Bases, and Model Evaluation

That last one is worth emphasizing. AWS became the first cloud provider to achieve FedRAMP High and DoD IL-4/5 authorization for foundation models from third-party providers. For government agencies and defense contractors, this removes what was previously a meaningful blocker to adopting foundation model capabilities at all.

All audit reports and ISO certificates are available for download through AWS Artifact, so your compliance team can review the actual third-party assessments rather than taking AWS’s word for it.

IAM: Fine-Grained Access Control Across the Stack

Bedrock integrates deeply with AWS IAM, giving you the same identity-based access control you use everywhere else in your AWS environment. There are no resource-based policies to manage — access is controlled through identity-based policies attached to users, roles, and groups.

At the most basic level, you control which models a principal can invoke by scoping bedrock:InvokeModel to specific model ARNs. For example, this policy blocks a principal from invoking Claude 3 Opus models:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-opus*"
    }
  ]
}

Bedrock also exposes condition keys that unlock more nuanced governance patterns:

  • bedrock:GuardrailIdentifier — require that a specific guardrail be applied on every invocation, enforced at the IAM level before the request is even processed
  • bedrock:InferenceProfileArn — force invocations through a specific inference profile rather than directly calling a foundation model (useful for cost tracking and routing control)
  • bedrock:ServiceTier — limit which service tiers (priority, default, flex) a principal can access, giving cost governance teeth
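
To illustrate the first of those keys, one pattern is a deny-unless statement: block every invocation whose guardrail identifier doesn't match the expected one. A sketch built as a Python dict — the guardrail ARN is a placeholder, and the exact condition operator should be checked against the current IAM documentation:

```python
import json

# Hypothetical deny-unless policy: InvokeModel is denied unless the
# request carries the expected guardrail identifier (placeholder ARN).
require_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "bedrock:InvokeModel",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier":
                        "arn:aws:bedrock:us-east-1:111122223333:guardrail/EXAMPLE-ID"
                }
            },
        }
    ],
}

print(json.dumps(require_guardrail, indent=2))
```

Because the deny fires whenever the condition key doesn't match, a caller cannot simply omit the guardrail to slip past the control.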

Bedrock Agents, Knowledge Bases, and Model Customization Jobs all operate through IAM service roles — roles assumed by the Bedrock service on your behalf to access S3 buckets, Lambda functions, and other resources. The permissions on that service role define the blast radius of what your agent or job can actually touch, so least-privilege here matters.
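
Least privilege starts with the role's trust policy: only the Bedrock service, acting on behalf of your account, should be able to assume it. A sketch of that trust relationship — the account ID is a placeholder:

```python
import json

# Hypothetical trust policy for a Bedrock service role. Restricting the
# assuming principal to the Bedrock service and pinning aws:SourceAccount
# guards against cross-account confused-deputy scenarios.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "bedrock.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": "111122223333"}
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```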

CloudTrail captures Bedrock API activity as management events by default, including InvokeModel, CreateAgent, CreateKnowledgeBase, and the full suite of control-plane actions — these events record the call and the caller, not the prompt contents. Data-plane operations like InvokeAgent and Retrieve require explicit configuration of advanced event selectors. For full prompt and response capture, Model Invocation Logging is a separate feature that writes complete request and response data to S3 or CloudWatch Logs — essential for audit and debugging in regulated environments.
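
Turning invocation logging on is a single configuration call. A hedged sketch of the configuration payload — the bucket name is a placeholder, and the boto3 call is shown in a comment rather than executed:

```python
import json

# Hypothetical Model Invocation Logging configuration: deliver full
# request/response data to an S3 bucket (placeholder name). With boto3
# this would be applied roughly as:
#   boto3.client("bedrock").put_model_invocation_logging_configuration(
#       loggingConfig=logging_config)
logging_config = {
    "s3Config": {
        "bucketName": "example-bedrock-invocation-logs",
        "keyPrefix": "bedrock/",
    },
    "textDataDeliveryEnabled": True,
    "imageDataDeliveryEnabled": True,
    "embeddingDataDeliveryEnabled": True,
}

print(json.dumps(logging_config, indent=2))
```

Remember that these logs contain the prompts and completions themselves, so the destination bucket deserves the same access controls and encryption posture as the workload's other sensitive data.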

AgentCore Cedar Policies: Guardrails for Agent Tool Calls

IAM secures access to Bedrock itself. But when an AI agent starts calling tools — querying databases, sending emails, triggering workflows — you need a different kind of enforcement: one that operates at the level of individual tool invocations and can reason about the parameters being passed, not just the identity of the caller.

This is what AgentCore Policy provides, using Cedar — AWS’s open-source authorization language.

Cedar policies are evaluated by AgentCore Gateway before every tool call, intercepting requests in real time. The agent cannot bypass this layer regardless of what it decides to do — the enforcement is external to the agent’s reasoning process. An agent that reasons its way to “I should delete all records” will have that tool call blocked if no Cedar policy permits it.

Policies follow Cedar’s default-deny model: unless an explicit permit matches, the request is blocked. A forbid overrides any permit, making it safe to write broad allow rules with narrow carve-outs:

permit (
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ProcessRefund__execute",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:..."
)
when {
  context.input.amount < 500 &&
  principal.getTag("role") == "billing-agent"
};

That policy permits refund processing only when the amount is under $500 and the caller has the billing-agent role claim from their OAuth token. Parameters are evaluated against the actual values in the tool call — not just whether the caller has permission to invoke the tool at all.
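
And because forbid always wins, a narrow carve-out can sit safely on top of broad permits. An illustrative sketch — DeleteRecords__execute is a hypothetical tool name, echoing the runaway-agent example above:

forbid (
  principal,
  action == AgentCore::Action::"DeleteRecords__execute",
  resource
);

No matter what permits exist or what the agent reasons its way into, this tool call is blocked for every principal.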

For teams that don’t want to write Cedar directly, AgentCore includes a natural language policy authoring system. Describe the rule in plain English — “Allow the scheduling agent to book appointments, but only during business hours and only for the patient’s own records” — and the system generates valid Cedar, validates it against your gateway’s tool schema, and surfaces any issues before the policy goes live. Compliance and security teams can author and audit guardrails without touching code.

Policies can also run in LOG_ONLY mode first, letting you observe which requests would be blocked before enabling enforcement — a useful safety valve when rolling out new controls against existing agents.

Putting It Together

Bedrock’s security model is layered by design. Mantle handles inference isolation at the compute level. The data-not-for-training commitment is contractual and architectural. PrivateLink keeps your traffic off the public internet. Compliance certifications give you and your auditors documented third-party validation. IAM ties Bedrock access into your existing identity governance. And AgentCore Cedar policies extend deterministic access control to the specific actions your agents take in the world.

None of these controls require you to choose between security and capability. The same Bedrock service that enterprise security teams can lock down to HIPAA and FedRAMP High standards is the same one your teams use to build production AI applications. The two don’t have to be in tension — that’s the point.

If you’re evaluating Bedrock for a regulated workload, the Bedrock Security and Compliance page and AWS Artifact are the right starting points. For teams beginning to deploy agents, AgentCore Policy is worth standing up early — it’s significantly easier to define guardrails before agents are in production than to retrofit them afterward.

Disclaimer: The opinions expressed herein are my own personal thoughts and do not represent the views of any present or past employer in any way.