Pass data from one Stack to another
If you have multiple Stacks that do not share a provisioning lifecycle, you may still need to pass information between them. To export data from one Stack to another, use a publish_output block to output data from one Stack, and use an upstream_input block in another Stack to consume that output.
If the output value of a Stack changes after a run, HCP Terraform automatically triggers runs for any Stacks that depend on those outputs.
Background
To output information from a Stack, declare a publish_output block in the deployment configuration of the Stack exporting data. We refer to the Stack that declares a publish_output block as the upstream Stack.
To use another Stack's output, declare an upstream_input block in the deployment configuration of a different Stack in the same project. We refer to the Stack that declares an upstream_input block as the downstream Stack. For example, if Stack A produces outputs that Stack B depends on, Stack A is the upstream Stack, and Stack B is the downstream Stack.
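To make the naming concrete, the following minimal sketch uses two hypothetical Stacks in the same project: stack-a publishes a value, and stack-b consumes it. The deployment, output, and Stack names here are placeholders, not values from a real configuration.
# Stack A (upstream) deployment configuration
publish_output "shared_value" {
  description = "A value for downstream Stacks to consume."
  value       = deployment.main.some_output
}
# Stack B (downstream) deployment configuration
upstream_input "stack_a" {
  type   = "Stack"
  source = "app.terraform.io/hashicorp/Default Project/stack-a"
}
deployment "main" {
  inputs = {
    # Reads stack-a's published shared_value output
    consumed_value = upstream_input.stack_a.shared_value
  }
}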
As a real-life example, you could have a Stack for shared services, such as networking infrastructure, and a separate Stack for application components. This separation lets you manage each Stack independently: you can export data from your networking Stack with the publish_output block and consume that data in your application Stack using the upstream_input block.
Requirements
The publish_output and upstream_input blocks require Terraform version terraform_1.10.0-alpha20241009 or higher. We recommend downloading the latest version of Terraform to use the most up-to-date functionality.
Downstream Stacks must also reside in the same project as their upstream Stacks.
Declare outputs
You must declare a publish_output block in your deployment configuration for each value you want to output from your current Stack.
Once you apply a Stack configuration version that includes your publish_output block, HCP Terraform publishes a snapshot of those values, which allows HCP Terraform to resolve them. This means you must apply your Stack’s deployment configuration before any downstream Stacks can reference your Stack's outputs.
For example, you can add a publish_output block for the vpc_id in your upstream Stack’s deployment configuration.
network.tfdeploy.hcl
# Networking Stack deployment configuration
publish_output "vpc_id" {
description = "The networking Stack's VPC's ID."
# You can directly reference a deployment's values with the
# deployment.deployment_name syntax
value = deployment.network.vpc_id
}
After applying this configuration, any Stack in the same project can reference this vpc_id output by declaring an upstream_input block. Learn more about the publish_output block.
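Because each publish_output block publishes a single value, declare one block for every value you want to share. The following sketch assumes the networking deployment also exposes a hypothetical subnet_ids output:
network.tfdeploy.hcl
# Networking Stack deployment configuration
publish_output "vpc_id" {
  description = "The ID of the networking Stack's VPC."
  value       = deployment.network.vpc_id
}
publish_output "subnet_ids" {
  # Assumes deployment.network exposes a subnet_ids output
  description = "The IDs of the networking Stack's subnets."
  value       = deployment.network.subnet_ids
}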
Use an upstream Stack’s outputs
Declare an upstream_input block in your Stack’s deployment configuration to read values from another Stack's publish_output block. Adding an upstream_input block creates a dependency on the upstream Stack.
For example, if you want to use the output vpc_id from an upstream Stack in the same project, declare an upstream_input block in your deployment configuration.
application.tfdeploy.hcl
# Application Stack deployment configuration
upstream_input "networking_stack" {
type = "Stack"
source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}
deployment "application" {
inputs = {
# This Stack depends on the networking Stack for this value
vpc_id = upstream_input.network_stack.vpc_id
}
}
After pushing your Stack's configuration to HCP Terraform, HCP Terraform searches for the most recently published snapshot of the upstream Stack your configuration references. If no snapshot exists, the downstream Stack's run fails.
If HCP Terraform finds a published snapshot for your referenced upstream Stack, then all of that Stack's outputs are available to this downstream Stack. Add upstream_input blocks for every upstream Stack you want to reference. Learn more about the upstream_input block.
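A downstream Stack can also read outputs from several upstream Stacks at once. The following sketch assumes a second, hypothetical dns-stack in the same project that publishes a zone_id output:
application.tfdeploy.hcl
# Application Stack deployment configuration
upstream_input "network_stack" {
  type   = "Stack"
  source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}
upstream_input "dns_stack" {
  # Hypothetical second upstream Stack
  type   = "Stack"
  source = "app.terraform.io/hashicorp/Default Project/dns-stack"
}
deployment "application" {
  inputs = {
    vpc_id  = upstream_input.network_stack.vpc_id
    zone_id = upstream_input.dns_stack.zone_id
  }
}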
To stop depending on an upstream Stack’s outputs, do the following in your downstream Stack's deployment configuration:
- Remove the upstream Stack's upstream_input block.
- Remove any references to the upstream Stack's outputs (the sketch after this list shows the result).
- Push your configuration changes to HCP Terraform and apply the new configuration.
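For example, after removing the dependency, the application Stack's deployment configuration might supply vpc_id directly. The value below is a hypothetical placeholder:
application.tfdeploy.hcl
# Application Stack deployment configuration without the upstream dependency
deployment "application" {
  inputs = {
    # vpc_id is now set directly instead of read from the networking Stack
    vpc_id = "vpc-0123456789abcdef0" # Hypothetical placeholder value
  }
}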
Trigger runs when output values change
If an upstream Stack's published output values change, HCP Terraform automatically triggers runs for any downstream Stacks that rely on those outputs.
For example, if your upstream networking Stack’s output changes, HCP Terraform triggers a new plan for the downstream Stacks that reference that output.
application.tfdeploy.hcl
# Application Stack deployment configuration
upstream_input "network_stack" {
type = "Stack"
source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}
deployment "application" {
inputs = {
# This Stack depends on the networking Stack’s output, so if
# the vpc_id changes then HCP Terraform triggers a new run for this Stack.
vpc_id = upstream_input.network_stack.vpc_id
}
}
This approach allows you to decouple Stacks that don’t share a lifecycle, while also ensuring that updates in an upstream Stack ripple out to any downstream Stacks.