Migrate to HCP Vault Dedicated with codified configuration
Challenge
As part of a cloud adoption strategy, you will inevitably face the need to migrate existing self-hosted infrastructure and use cases to cloud platforms.
For example, as the operator of a self-hosted acceptance testing or development Vault cluster running on the community edition, you might find the need to test use cases in a cloud-hosted Vault cluster with enterprise features.
HCP Vault Dedicated provides a hosted Vault Enterprise solution that is highly scalable and offers enterprise features, such as Performance Replication.
Solution
Codification is a popular approach to managing Vault configuration: you manage the lifecycle of a Vault server or cluster's configuration and state as code. Most often, this is implemented with Terraform and the Vault provider.
You can learn more about Vault configuration codification with Terraform in the Codify Management of Vault Using Terraform, Codify Management of Vault Enterprise Using Terraform, and Codify Management of HCP Vault Dedicated tutorials.
If you follow the principles covered in codifying Vault configuration with the Vault provider, you can use Terraform to apply your Vault cluster configuration from a self-hosted community edition Vault server to a Vault Dedicated cluster.
Scenario introduction
In this tutorial, you will use terminal sessions and the command line to start a self-hosted dev mode Vault server and provision it with a codified configuration using Terraform and the Vault Provider.
After demonstrating the local Vault server configuration with username and password authentication and retrieval of key/value secrets using the related token policies, you will update your local environment to prepare for application of the configuration to your Vault Dedicated cluster.
You will then apply the modified Terraform configuration to the Vault Dedicated cluster. After it is applied, you will validate the configuration again in Vault Dedicated.
Prerequisites
A Linux or macOS development host that you use to perform most of the tasks which make up the lab (this lab was last tested on macOS 11.6.5)
HCP Vault Dedicated cluster; you can deploy the cluster with the steps in Deploy HCP Vault Dedicated with Terraform or Create a Vault Cluster on HCP.
- This tutorial uses a Vault Dedicated cluster deployed with Terraform, and the Vault provider configuration found in the learn-manage-codified-hcp-vault-terraform repository.
- To successfully follow along with this tutorial, you must enable the public interface on your Vault Dedicated cluster. This configuration is covered in both the Deploy HCP Vault Dedicated with Terraform and Create a Vault Cluster on HCP tutorials.
AWS S3 bucket with server side encryption enabled. (optional)
- This lab optionally uses an AWS Key Management Service (KMS) key to encrypt the S3 bucket contents.
- Terraform needs kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey permissions on the KMS key. Review the Terraform S3 State Storage documentation for more details. (optional)
Tip
You can use HCP Terraform instead of the AWS S3 bucket and KMS key to avoid the need for encrypted local state altogether.
Personas
This scenario involves two personas:
The admin persona runs Vault, applies configuration with the Terraform Vault provider, edits the configuration and environment, and applies the configuration to Vault Dedicated.
The student persona uses an auth method to authenticate with Vault and retrieves secrets from a key/value secrets engine.
Policy requirements
For the purpose of this scenario, you will start a local dev mode Vault server and use the initial root token. In production, you should be more restrictive with root token usage and instead, you should create the correct minimum required ACL policies to complete the scenario tasks.
Here are the required ACL policies for the tasks performed by the admin persona in this scenario.
Admin persona policy
Prepare scenario environment
The scenario consists of two distinct environments:
Your local host: This is where you will run the self-hosted dev mode Vault server and all Terraform CLI commands.
Your HCP account: This is where you have deployed your Vault Dedicated development tier cluster as the target of the migration of the self-hosted Vault server state. You can use Terraform to deploy the Vault cluster or do so manually through the web UI.
For ease of cleanup and simplicity, create a temporary directory that will contain all required configuration for the scenario, named learn-vault-lab, and export its path value as the environment variable HC_LEARN_LAB.
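These two steps might look like the following; the directory location under /tmp is an assumption for illustration.

```shell
# Create the scenario directory (location is an example choice).
mkdir -p /tmp/learn-vault-lab

# Export its path for use throughout the scenario.
export HC_LEARN_LAB=/tmp/learn-vault-lab
```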
Self-Hosted Vault server configuration
It is sufficient for the purposes of this scenario to use a Vault dev mode server to represent your self-hosted Vault.
Start a dev mode Vault server that listens on all interfaces, background the process, store its process ID in the environment variable LEARN_VAULT_PID, and write log output to the file vault-server.log.
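One way to do this is shown below; the root token value (root) is an assumption for illustration, set explicitly so that later steps can reference it.

```shell
# Start a dev mode server on all interfaces, log to a file,
# and record the background process ID.
vault server -dev \
  -dev-root-token-id=root \
  -dev-listen-address=0.0.0.0:8200 > vault-server.log 2>&1 &
export LEARN_VAULT_PID=$!
```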
Tip
You should have installed the vault CLI binary as a prerequisite to this lab.
Export a VAULT_ADDR environment variable to address the server.
Export a VAULT_TOKEN environment variable to contain the initial root token value.
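For a local dev mode server, the exports could look like this; the token value root assumes the server was started with that dev root token.

```shell
# Address of the local dev mode server.
export VAULT_ADDR=http://127.0.0.1:8200

# Initial root token value (assumed to have been set at server start).
export VAULT_TOKEN=root
```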
Check the Vault server status.
Note the Storage Type, Cluster Name, and Cluster ID values as those will be different between the self-hosted Vault server and your Vault Dedicated cluster as you'll observe later.
The Vault server is ready.
Validate that your root token is working to provide correct access to Vault.
You are ready to proceed with examining and applying the Vault provider configuration with Terraform.
Terraform configuration repository
You can find the Terraform configuration that you will use for this scenario in the learn-hcp-vault-ops GitHub repository.
Clone the repository into the scenario directory.
Change into the repository directory and examine the contents.
Terraform Vault provider configuration
Examine the current Terraform configuration.
There is a collection of Vault ACL policies and Terraform configuration present in this directory.
Let's first check out the ACL policies, beginning with the admin policy.
The file is commented, with descriptions of each policy.
Now, examine the student-secrets policy.
The policies are focused on allowing access to certain Key/Value secrets and a Transit secrets engine key.
Let's examine the Terraform configuration to learn about how the self-hosted Vault will be initially provisioned.
First, the main configuration.
You will notice that Terraform can use encrypted state in an Amazon S3 bucket to protect sensitive information such as static secrets. The settings for the backend use partial configuration, which means that the settings specific to the bucket and KMS key are contained in a configuration file, which you will examine later.
After the Terraform backend configuration, the Vault provider block is present and preceded by a note about using environment variables instead of hard-coding any values into the configuration file itself.
Examine the ACL policy resources.
There are two policy resources defined here: one for the admin persona, and one for the student persona. Each resource references a separate ACL policy file.
Examine the auth method resources.
This configuration enables a username and password auth method, and defines the corresponding users for the admin persona and student persona.
Examine the secrets engines resources.
This configuration enables a Key/Value secrets engine (version 2), enables an instance of the Transit secrets engine, and creates a Transit secrets engine encryption key.
You need to override the values contained within config.s3.tfbackend, so examine that file as well.
The values control settings for the encrypted state S3 bucket, including the bucket name, the Terraform state object key name, the AWS KMS key ID, and the AWS region.
Edit the file and set your specific values before proceeding to the next step.
Note
Terraform will prompt you for interactive input of any values not set in the config.s3.tfbackend file.
Initialize workspace and apply configuration
Initialize your Terraform workspace, which will download and configure the providers. For the purposes of this lab, we will use local state storage instead of HCP Terraform or configuring the encrypted S3 bucket.
Expected output:
Run terraform apply and review the planned actions. Your terminal output should indicate the plan is running and what resources will be created.
This terraform apply will provision a total of 5 resources (username and password auth method, Key/Value secrets engine, etc.).
Confirm the apply with a yes.
Successful output concludes with a line like this:
Validate configuration
You can test that the configuration is working as expected by authenticating with Vault and accessing a secret.
Before doing so, you need to unset the VAULT_TOKEN environment variable so that it does not override the value of the token that the Vault token helper caches to ~/.vault-token after successful authentication.
Use the CLI and username and password (userpass) auth method to authenticate as the student user.
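Together, the unset and login steps might look like this; the username student matches the user defined in the auth method configuration.

```shell
# Unset the root token so the CLI uses the cached login token instead.
unset VAULT_TOKEN

# Authenticate with the userpass auth method; Vault prompts for the password.
vault login -method=userpass username=student
```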
Vault will respond with:
Enter the student user's password: ch4ngeMe~
Successful output example:
You have received a token with the default and student-secrets policies attached. Let's check out the secrets stored at the api-credentials path.
Interesting: we are authenticated as a student, but what access do we have for secrets under the admin path?
The student persona ACL policy allows list capabilities on the api-credentials/metadata/* path, and in this case you know there is a key named api-wizard, but can you access the contents of that secret?
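A read attempt might look like this; the exact path (api-credentials/admin/api-wizard) is an assumption based on the key name mentioned above.

```shell
# As the student persona, this read is expected to fail
# with a 403 permission denied error.
vault kv get api-credentials/admin/api-wizard
```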
No!
The ACL policy works as intended and you are unable to access admin secrets as the student persona. You can try accessing the secrets under the api-credentials/student/ path, however.
Try to access the api-key secret.
Nicely done, you have validated some of the configuration that was applied to your self-hosted Vault server.
Now you are ready to try applying this same configuration to your HCP Vault Dedicated cluster!
HCP Vault Dedicated cluster configuration
There are surprisingly few changes required to migrate this same configuration to Vault Dedicated.
One important difference between self-hosted Vault and Vault Dedicated is that all Vault Dedicated clusters run Vault Enterprise and use the Enterprise Namespaces feature to set a default namespace of admin.
With this in mind, you can export the VAULT_NAMESPACE environment variable to instruct Terraform to use that namespace.
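The export for the default Vault Dedicated namespace:

```shell
# All Vault Dedicated clusters use a default namespace named "admin".
export VAULT_NAMESPACE=admin
```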
To deploy the configuration, you need to change the values of the VAULT_ADDR and VAULT_TOKEN environment variables to match your Vault Dedicated cluster.
Access your Vault Dedicated cluster UI, and under Quick actions, click the Public Cluster URL.
In the terminal, set the VAULT_ADDR environment variable to the copied address.
Return to the Vault Dedicated cluster UI, and on the Overview page, click Generate token.
Within a few moments, a new token will be generated. Copy the Admin Token.
In the terminal, set the VAULT_TOKEN environment variable to the copied token value.
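The values below are placeholders for illustration; substitute the public cluster URL and admin token you copied from the HCP UI.

```shell
# Replace both placeholder values with your own cluster's values.
export VAULT_ADDR="https://your-cluster-public-url:8200"
export VAULT_TOKEN="your-copied-admin-token"
```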
These changes to the environment are all that is required to apply the configuration to the Vault Dedicated cluster.
Deploy the updated Terraform Vault provider configuration to Vault Dedicated.
Confirm the apply with a yes.
Successful output concludes with a line like this:
Validate HCP Vault Dedicated configuration
Check Vault status:
The output is quite different from that of the self-hosted Vault server. The version is Enterprise (as indicated by the +ent suffix), the storage type is now raft, and the Cluster Name, Cluster ID, and HA Cluster values are completely different.
As a final brief check, unset the VAULT_TOKEN value.
Then attempt to authenticate to Vault with the userpass auth method as the admin persona.
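Together, those steps might look like this; the username admin matches the user defined in the auth method configuration.

```shell
# Drop the admin token so the login token is cached and used instead.
unset VAULT_TOKEN

# Authenticate with the userpass auth method; Vault prompts for the password.
vault login -method=userpass username=admin
```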
Vault will respond with:
Enter the admin user's password: superS3cret!
Successful output example:
If you recall, the student persona had access to a secret at api-credentials/student/golden. Try to get this secret's contents.
Try to get the secret again, but this time perform a Base64 decode operation on the egg-b64 field.
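One possible command, piping the single field through the local base64 utility; on older macOS versions the decode flag is -D rather than --decode.

```shell
# Fetch only the egg-b64 field and Base64 decode it.
vault kv get -field=egg-b64 api-credentials/student/golden | base64 --decode
```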
A surprise is revealed!
You have validated that the admin persona is now able to authenticate with your Vault Dedicated cluster using the username and password auth method and the previously configured password, which were applied to your new Vault Dedicated cluster.
You can now use the Vault API, CLI, or Web UI to access all of the configuration previously available in your self-hosted Vault in Vault Dedicated.
Have fun exploring your new Vault Dedicated!
Important caveats and considerations
You should be aware that there are some specific caveats and considerations around migrating any self-hosted Vault to Vault Dedicated, which also affect the approach shown in this scenario.
Reminder
The hands-on lab in this tutorial is a simple scenario appropriate for migration of non-production Vault installations, but it is not suitable for use in production as-is.
Enterprise Vault with a default namespace
Keep in mind that all Vault Dedicated clusters operate the Vault Enterprise edition, which includes the Enterprise Namespaces feature. As such, all Vault Dedicated clusters use a default namespace named admin.
Seal is not migrated
In this scenario, you started with a new self-hosted dev mode Vault server, and applied codified Vault configuration to it with the Vault provider and Terraform. After some changes to your local environment, you then applied this same configuration to a Vault Dedicated cluster.
At no time did this process migrate the actual Vault seal.
Storage type could change
Be aware that all Vault Dedicated clusters use the Integrated Storage (raft) backend. A dev mode Vault server uses the in-memory (inmem) backend, so the storage backend actually changes during the configuration migration you walked through in this lab.
If you are not already using the Integrated Storage backend, and you need to preserve your current storage backend type, then this migration approach will not work for you.
Audit devices
Vault Dedicated has constrained audit device functionality. You cannot currently migrate a self-hosted Vault audit device to Vault Dedicated.
Public IP address
In this lab you learned about migrating the state of a self-hosted development Vault server. As this scenario did not deal with production, you could simply enable the public interface on your Vault Dedicated cluster.
This solution is not acceptable for production clusters. While solutions involving bastion hosts within the VPC peered with your Vault cluster's HVN could provide a means to migrate without enabling a public interface, this is not covered in the lab scenario.
Feature parity: ADP Module
The feature parity between self-hosted Vault and Vault Dedicated grows closer with each iteration, but is not yet 100%.
Features which require the ADP Module, such as the Transform secrets engine, are not yet available in Vault Dedicated, so keep this in mind when migrating as well.
Summary
You learned how to migrate a codified Vault configuration from a self-hosted installation to a Vault Dedicated cluster using Terraform.
While this is just one of the approaches which are detailed in the Migration Strategies and Considerations Guidelines documentation, it is a quick and straightforward solution for certain use cases, such as Development or Quality Assurance Vault instances.
You also learned about some of the caveats and limitations involved in migration to Vault Dedicated using the strategy explained in this lab.
Clean up
Here are the steps you need to completely clean up the scenario content from your local environment.
Stop the Vault server process.
Change back to your $HOME directory.
Recursively remove the Learn lab directory.
Unset the environment variables.
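The local cleanup steps above can be sketched as follows, guarding against variables that were never set.

```shell
# Stop the Vault dev server process, if it is still running.
if [ -n "${LEARN_VAULT_PID:-}" ]; then
  kill "$LEARN_VAULT_PID"
fi

# Return to the home directory and remove the lab directory.
cd "$HOME"
if [ -n "${HC_LEARN_LAB:-}" ]; then
  rm -rf "$HC_LEARN_LAB"
fi

# Unset the scenario environment variables.
unset HC_LEARN_LAB LEARN_VAULT_PID VAULT_ADDR VAULT_TOKEN VAULT_NAMESPACE
```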
Finally, delete the Vault Dedicated cluster instance and HVN as necessary, using the web UI or Terraform.