HCP Terraform — Full Review · infraplz.dev
Review · IBM (HashiCorp)
HCP Terraform
Managed control plane for HashiCorp Terraform: remote state, run execution, and policy enforcement. Billed by Resources Under Management (RUM) since November 2023. Operated by IBM since the HashiCorp acquisition closed in late 2024.
HCP Terraform is the managed control plane for HashiCorp Terraform, operated by IBM since the HashiCorp acquisition closed in late 2024. It was rebranded from "Terraform Cloud" in April 2024 and sits at the same URL — app.terraform.io — but the rebrand coincided with a structural shift in how the product is sold, governed, and priced.
The product centralizes three concerns that Terraform itself leaves to the operator: remote state storage, run execution, and policy enforcement. A workspace in HCP Terraform represents one Terraform configuration with its own state, variables, run history, and locking semantics. Runs are triggered by VCS webhooks, the CLI, the API, or upstream workspaces via run triggers. State is stored encrypted at rest, locked during plans and applies, and exposed to other workspaces through the terraform_remote_state data source.
Three external developments shape any current evaluation. First, the November 2023 shift from per-seat to Resources Under Management (RUM) billing reset cost predictability for every customer above a small team. Second, the IBM acquisition has reordered the roadmap: CDKTF was deprecated in December 2025, several CLI subcommands were silently moved behind higher tiers in January 2025, and a further capability entered public preview in May 2026. Third, OpenTofu, the community fork, shipped native client-side state encryption in version 1.7, a capability HCP Terraform's SaaS tiers do not match.
Policy as code: Sentinel (native), OPA (Agent 1.28.0+, January 2026)
State encryption: At-rest (server-side); client-side not supported
SSO: SAML on all tiers including Free
Audit logs: Standard+
Compliance: SOC 2 Type II, ISO 27001:2022, ISO 27017, ISO 27018
FedRAMP: Not publicly listed
HIPAA: Not publicly listed
OpenTofu support: Not officially supported
Open source: No
Pricing source: hashicorp.com/en/pricing
3. What It Does
HCP Terraform executes Terraform runs against a workspace's configuration and stores the resulting state file. Each workspace pins a Terraform version, holds environment and Terraform variables (with optional sensitive and — since Terraform 1.10 — ephemeral flags), and records a linear history of plans and applies.
Runs originate from one of four triggers: a VCS push event (GitHub, GitLab, Bitbucket, Azure DevOps), a terraform plan/apply invocation against the remote backend, an API call, or a run trigger fired by an upstream workspace's successful apply. Speculative plans run on pull requests and post results to the VCS provider as a check.
Policy evaluation runs between plan and apply. Sentinel is the native engine; OPA support landed in Agent 1.28.0 (January 2026) but is structurally decoupled from the workspace RBAC model. Standard tier permits 5 total policies with only 1 enforceable, which constrains its use for compliance gating. Premium and Enterprise lift the limits.
State sharing across workspaces is read-through terraform_remote_state, scoped by workspace allowlist. Stacks — HashiCorp's multi-component deployment construct — reached GA in early 2026 and addresses fan-out across environments and regions, but requires migration from the legacy workspace model.
4. Platforms and Integration
VCS: GitHub.com and GitHub Enterprise, GitLab.com and self-hosted, Bitbucket Cloud and Data Center, Azure DevOps Services and Server. Webhook ingress and OAuth-based repository selection are standard across providers.
CLI: The terraform binary integrates via the cloud {} block in terraform { } configuration, replacing the older remote backend. Workspaces are addressed by name or tag.
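A minimal sketch of that backend configuration, assuming a hypothetical organization name and workspace tag:

```hcl
terraform {
  # The cloud block replaces the older `backend "remote"` configuration.
  cloud {
    organization = "example-org"   # placeholder organization name

    workspaces {
      # Select workspaces by tag; `name = "..."` pins a single workspace instead.
      tags = ["networking"]
    }
  }
}
```

With tags, `terraform workspace select` switches among all matching workspaces; with `name`, the configuration is bound to exactly one.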
Agents: Self-hosted runners (HCP Terraform Agents) execute runs inside private networks. Required for any workspace whose providers must reach internal endpoints — VPC-only RDS, private GKE, on-prem vSphere. Agent pools are a Premium-tier feature.
Identity: SAML SSO on all tiers including Free. SCIM provisioning on Premium. OIDC dynamic credentials are first-class for AWS, Azure, GCP, and Vault — workspaces can mint short-lived cloud credentials at run time without storing static keys.
APIs: REST API (v2) and a tfe Terraform provider for managing HCP Terraform itself. The tfe provider is how most teams codify workspace, variable set, team, and policy set lifecycle.
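A sketch of that codification pattern with the tfe provider; the organization name, workspace name, and variable values are placeholders:

```hcl
provider "tfe" {
  # Token is read from the TFE_TOKEN environment variable or `terraform login`.
}

resource "tfe_workspace" "app" {
  name              = "app-prod-us-east-1"   # placeholder
  organization      = "example-org"          # placeholder
  terraform_version = "1.10.0"
}

resource "tfe_variable" "region" {
  key          = "AWS_REGION"
  value        = "us-east-1"
  category     = "env"                       # "env" or "terraform"
  workspace_id = tfe_workspace.app.id
}
```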
CDKTF: Deprecated December 2025. Existing TypeScript, Python, Java, C#, and Go pipeline code requires migration to HCL or alternate tooling before adoption.
5. In Practice
RUM count drift
A workspace's RUM count equals the number of managed resources in its state file at peak hour. null_resource, terraform_data, data sources, and locals do not count. Helm charts, Kubernetes operators, and modules wrapping cloud-native primitives do — and they multiply.
A single helm_release for a service mesh can land 60–120 child resources in state. A Crossplane-style Kubernetes operator pattern produces resource counts 30–50% higher than initial estimates. Teams budgeting against the resource count of their HCL repository routinely under-forecast actual RUM by a factor of two.
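As an illustration of the counting rules above (resource names and providers are hypothetical):

```hcl
resource "null_resource" "bootstrap" {}                  # 0 RUM: never counted

resource "terraform_data" "marker" { input = "deploy" }  # 0 RUM: never counted

data "aws_ami" "base" {                                  # 0 RUM: data sources do not count
  most_recent = true
  owners      = ["amazon"]
}

resource "aws_instance" "web" {                          # 1 RUM per instance in state
  count         = 3                                      # => 3 RUM
  ami           = data.aws_ami.base.id
  instance_type = "t3.micro"
}
```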
Concurrency failure cascades
Three failure modes recur on Standard (3 concurrent runs) and Premium (10 concurrent runs):
CI/CD timeout starvation. Five simultaneous merges to a Standard-tier organization queue all five runs behind the 3-slot limit. A typical 15-minute job timeout in GitHub Actions fires before HCP Terraform begins executing the queued runs. The pipeline reports failure, but the runs eventually complete in HCP Terraform's UI, leaving CI marked red for applies that ultimately succeeded.
State lock freeze. A terraform apply -target=... that hangs holds the workspace lock indefinitely. Subsequent runs queue. The queue grows until the API rejects new webhook triggers. Manual lock release through the UI requires admin permissions on the workspace.
Speculative plan consumption. Pull request bursts run speculative plans against the same concurrency pool as production applies. On Premium with 10 slots, a 12-PR burst can block a production hotfix until speculative plans drain.
Workspace sprawl
HCP Terraform offers no native lifecycle management for workspaces, no scanning of cloud accounts for unmanaged resources, and no automated discovery of drift across an organization's footprint. Above 50 workspaces, inventory and cleanup become an out-of-band operational burden — typically handled via custom tooling against the tfe provider or third-party platforms.
Monorepo behavior
A single commit to a monorepo triggers speculative plans on every workspace tied to that repository, irrespective of which paths changed. Path filtering exists but is configured per-workspace and frequently misaligned with directory layout. Stacks (GA early 2026) addresses this with a single component graph per repository, but requires a migration from existing per-component workspaces.
ignore_changes on engine_version lets RDS auto-minor-upgrades happen out-of-band without Terraform fighting them on the next plan. prevent_destroy blocks accidental terraform destroy. create_before_destroy matters for resources fronted by an ASG or load balancer where deletion-then-creation would cause an outage.
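A sketch combining those lifecycle settings, with all identifiers hypothetical; create_before_destroy is shown on a launch template rather than the database, since a fixed-identifier RDS instance cannot be created before its predecessor is destroyed:

```hcl
resource "aws_db_instance" "primary" {
  identifier     = "app-prod"        # placeholder
  engine         = "postgres"
  engine_version = "16.4"
  instance_class = "db.r6g.large"

  lifecycle {
    ignore_changes  = [engine_version]  # tolerate out-of-band auto minor upgrades
    prevent_destroy = true              # plan fails if this resource would be destroyed
  }
}

resource "aws_launch_template" "web" {
  name_prefix = "web-"                  # placeholder
  image_id    = "ami-0123456789abcdef0" # placeholder

  lifecycle {
    create_before_destroy = true        # replacement exists before the old one is removed
  }
}
```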
The network-prod-us-east-1 workspace must explicitly allow app-prod-us-east-1 in its remote state sharing settings, or the data source returns an authorization error at plan time.
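The consuming side of that sharing relationship looks roughly like this (organization name and output name are placeholders):

```hcl
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "example-org"          # placeholder
    workspaces = {
      name = "network-prod-us-east-1"
    }
  }
}

# Fails with an authorization error at plan time unless the producing
# workspace's remote state sharing settings allow this workspace.
resource "aws_instance" "app" {
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
  ami           = "ami-0123456789abcdef0"  # placeholder
  instance_type = "t3.micro"
}
```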
ephemeral = true (Terraform 1.10+) keeps the value out of state and plan files. The constraint: ephemeral values cannot feed output blocks, cannot be referenced from lifecycle blocks, and cannot be assigned to non-ephemeral resource arguments that get persisted.
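A minimal sketch of the constraint, assuming a Vault provider that accepts the token:

```hcl
variable "session_token" {
  type      = string
  ephemeral = true   # Terraform 1.10+: never written to state or plan files
}

# Valid: provider configuration may consume ephemeral values.
provider "vault" {
  token = var.session_token
}

# Invalid: ephemeral values cannot feed output blocks.
# output "token" { value = var.session_token }   # error at plan time
```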
6. Pricing
How RUM is counted
Billing is computed hourly, by peak count of managed resources in state, summed across workspaces in the organization. A resource present at any point during a clock hour counts for that full hour. Resources that do not count: null_resource, terraform_data, all data blocks, locals, modules themselves (only their resource contents), and outputs.
The ephemeral environment trap
A PR-driven preview environment that creates 80 resources at 14:05 and destroys them at 14:50 incurs a full hour of billing on those 80 resources. A team running 200 PR previews per day, each lasting under an hour, can pay for resources whose median lifetime is 25 minutes as if they ran continuously.
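The arithmetic can be sketched as HCL locals; the rate comes from the Standard-tier pricing below, and 730 hours per month is an approximation:

```hcl
locals {
  rum_rate_month  = 0.47                 # Standard tier, $/resource/month
  hours_per_month = 730
  rum_rate_hour   = local.rum_rate_month / local.hours_per_month

  previews_per_day = 200
  resources_each   = 80
  billed_hours     = 1                   # a 25-minute lifetime still bills a full hour

  # 200 previews/day * 30 days * 80 resources * 1 hour = 480,000 resource-hours,
  # or roughly $309/month for infrastructure that is live ~40% of that time.
  monthly_preview_cost = local.previews_per_day * 30 * local.resources_each *
    local.billed_hours * local.rum_rate_hour
}
```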
Team-size cost table
Assuming average managed-resource counts that grow faster than headcount, from roughly 75 resources per engineer at five engineers to roughly 150 per engineer at fifty, in line with surveyed Terraform shops:
| Engineers | Resources | Standard ($0.47/resource/mo) | Premium ($0.99/resource/mo) |
|---|---|---|---|
| 5 | ~375 | $176 | $371 |
| 12 | ~1,200 | $564 | $1,188 |
| 25 | ~3,125 | $1,469 | $3,094 |
| 50 | ~7,500 | $3,525 | $7,425 |
Self-managed remote state (S3 + DynamoDB) at the same scale: $0.01–$0.09/month in AWS infrastructure cost.
Break-even framing
Self-managed Terraform remote state requires bucket setup, lock table provisioning, IAM hardening, and ongoing maintenance. Estimated maintenance time:
| Engineers | Self-managed maintenance | At $150/hr fully loaded | HCP Standard |
|---|---|---|---|
| 5 | 2 hrs/mo | $300 | $176 |
| 12 | 5 hrs/mo | $750 | $564 |
| 25 | 10 hrs/mo | $1,500 | $1,469 |
| 50 | 20 hrs/mo | $3,000 | $3,525 |
On total cost, HCP Terraform Standard undercuts the imputed labor cost of self-managed state through roughly 25 engineers; at 50 engineers and beyond, self-managed becomes cheaper. Either way, the HCP value proposition is governance, audit, and risk reduction, not infrastructure cost savings.
The perverse incentive
Because RUM is computed from state, teams have an incentive to suppress resource granularity. Observed patterns: collapsing per-environment workspaces into one workspace with count loops; replacing fine-grained aws_iam_role_policy resources with monolithic inline policies; avoiding aws_route53_record per-record patterns in favor of bulk JSON; declining to adopt modules that wrap cloud primitives at native granularity. Billing actively degrades architecture quality.
Stealth paywalling
In January 2025, terraform import and several advanced state manipulation subcommands moved behind higher tiers without a documentation changelog entry. CI/CD pipelines depending on terraform state rm and terraform import in import-and-rationalize workflows broke without warning on Free and Essentials tier organizations.
7. Requirements
| Requirement | Detail |
|---|---|
| Terraform version | 1.1+ (1.10+ for ephemeral variables; 1.6+ for testing framework) |
| VCS account | GitHub, GitLab, Bitbucket, or Azure DevOps for VCS-driven runs |
| Cloud credentials | Static (Variable Sets) or dynamic via OIDC (recommended) |
| Network | Public internet egress to app.terraform.io; HCP Agents for private-network targets |
| SAML IdP | Optional; SAML SSO available on all tiers including Free (Okta, Azure AD, Google Workspace, generic SAML 2.0) |
| Browser | Modern evergreen browser for the UI |
| CDKTF | Deprecated December 2025; net-new TypeScript/Python pipelines should not adopt |
8. Security and Data
The sensitive = true myth
sensitive = true on a variable or output suppresses CLI rendering only. It does not encrypt the value, does not omit it from state, and does not prevent any user with workspace state read permission from pulling the raw JSON. The state file stores the value in plaintext.
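A sketch of the gap: the flag changes CLI output, not storage. All names are hypothetical.

```hcl
variable "db_password" {
  type      = string
  sensitive = true   # suppresses rendering in CLI output only
}

resource "aws_db_instance" "primary" {
  identifier     = "app-prod"          # placeholder
  engine         = "postgres"
  instance_class = "db.r6g.large"
  password       = var.db_password     # lands in state as plaintext regardless
}

# Anyone with state read permission can recover it, e.g.:
#   terraform state pull | jq -r \
#     '.resources[] | select(.type=="aws_db_instance") | .instances[].attributes.password'
```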
High-risk resource types
These resources persist secrets in plaintext within state regardless of sensitive flagging:
| Resource | Field |
|---|---|
| aws_db_instance | password |
| random_password | result |
| tls_private_key | private_key_pem |
| kubernetes_secret | data |
| aws_iam_access_key | secret, ses_smtp_password_v4 |
| aws_secretsmanager_secret_version | secret_string |
| google_sql_user | password |
What HCP Terraform provides
TLS 1.2+ in transit
AES-256 encryption at rest, with HashiCorp-managed keys on SaaS tiers
Workspace-scoped RBAC and team permissions
Audit logs (Standard tier and above)
SAML SSO (all tiers including Free), SCIM (Premium+)
What HCP Terraform does not provide on SaaS tiers
Customer-controlled encryption keys (BYOK / HYOK)
HSM-backed key custody
Ephemeral variables
Terraform 1.10+ ephemeral = true variables are not written to state or plan files. They cannot populate output blocks, cannot be referenced from lifecycle blocks, and cannot be persisted into resource arguments that survive the run. The capability addresses session credentials and short-lived tokens; it does not address resource attributes like aws_db_instance.password that are intrinsically persisted.
OpenTofu differentiator
OpenTofu 1.7 and later ship native client-side state encryption using AES-GCM with KMS integration for AWS KMS, GCP KMS, and PBKDF2 passphrase. State is encrypted before being written to any backend, including S3. HCP Terraform's SaaS tiers do not provide an equivalent — the closest match is Terraform Enterprise (self-hosted) with customer-managed Vault integration. For HIPAA, PCI-DSS, or SOC 2 environments where state must be encrypted with customer-controlled keys, this is a structural gap.
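A configuration sketch of that OpenTofu capability using the AWS KMS key provider; the key ARN is a placeholder, and the exact attribute set should be checked against the OpenTofu encryption documentation:

```hcl
terraform {
  encryption {
    key_provider "aws_kms" "primary" {
      kms_key_id = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE"  # placeholder
      key_spec   = "AES_256"
    }

    method "aes_gcm" "default" {
      keys = key_provider.aws_kms.primary
    }

    state {
      method   = method.aes_gcm.default
      enforced = true   # refuse to ever write unencrypted state
    }
  }
}
```

With this block, state is encrypted client-side before it reaches any backend, S3 included, so backend operators never see plaintext.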
Compliance certifications
SOC 2 Type II, ISO 27001:2022, ISO 27017, ISO 27018. FedRAMP and HIPAA are not publicly listed as of the review date.
Spacelift is the only IaC platform with FedRAMP certification as of 2025, and reports approximately 50% of customer deployments running on OpenTofu.
env0 bills per-apply rather than per-resource, which avoids RUM spikes for dynamic and ephemeral infrastructure (PR previews, short-lived sandboxes).
OpenTofu teams have first-class support on Scalr (free OpenTofu runs) and Spacelift. HCP Terraform does not officially support OpenTofu.
Atlantis is $0 and self-hosted with PR-driven workflow. It has no native RBAC, no management UI, and no policy enforcement layer — operating it requires platform engineering capacity.