Lucidity manages disk lifecycles, requiring updates to the Terraform configuration to ensure seamless integration and avoid conflicts. This guide provides detailed instructions, including GCP-specific examples, to help align Terraform with Lucidity's disk management process. While the examples focus on GCP, the Terraform integration approach is adaptable and can be extended to other cloud providers, such as Azure or AWS.
We recommend involving the Lucidity support team during the Terraform integration process so we can provide any tailored recommendations to ensure a smooth, streamlined workflow.
This document is split into two sections:
Onboard Lucidity for Existing Mount Points
Onboard Lucidity for New Mount Points:
On New Compute Instances
On Existing Compute Instances
Note
Currently, OS disks cannot be onboarded via Terraform with Lucidity. This limitation applies to both new and existing Compute Instance setups.
Onboard Lucidity To Existing Mount Points
Overview
This section of the guide details the steps to onboard Lucidity onto existing Compute Instances and disks currently managed by Terraform.
Steps To Set Up Lucidity On Existing Mount Points
Update Terraform Code for Onboarding Lucidity
Adjust the Terraform (TF) code to support Lucidity’s requirements.
Install Lucidity Agent on the Compute Instances
Install the Lucidity agent directly on each Compute Instance. This can be done manually, via the Lucidity dashboard, or in a scripted format, based on preference.
Onboard Disks to Lucidity
Register the required disks through the Lucidity dashboard (or via APIs).
Verify Functionality
Confirm Lucidity’s functionality on the Compute Instances, ensuring that new disks are active and original disks are no longer required.
Remove Original Disks from Terraform
Once onboarded, remove references to the original disks from the Terraform configuration and remove the original disks from the infrastructure.
Terraform Code Overview
Lucidity manages two key resources in the environment, both of which need to be accounted for to avoid drift:
Resource tags: ManagedByLucidity and MarkedFor
Disks
1. Changes to Provider Configuration
Update the provider and resource configuration to account for the ManagedByLucidity and MarkedFor tags applied by Lucidity. In GCP these are applied as labels, and Terraform must be told to ignore them (see the lifecycle section below) to avoid conflicts with Terraform's state management.
provider "google" {
  project = "your_project_id"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  labels = {
    "ManagedByLucidity" = "true"
    "MarkedFor"         = "..." # value applied by Lucidity
  }
}
2. Lifecycle Management
Add the ignore_changes attribute to a lifecycle block in the Compute Instance resource. This instructs Terraform to disregard any changes to the labels that are being managed by Lucidity. Add this block to the disk resource or the Compute Instance block where the disk is defined.
lifecycle {
  ignore_changes = [
    labels["ManagedByLucidity"],
    labels["MarkedFor"],
  ]
}
Note
Adjustments to this code block may vary based on how the Terraform code is structured.
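For instance, when disks are defined as standalone resources, the same lifecycle block can be placed on the disk resource itself so that only the onboarded disks are affected. The following is a sketch; the resource name, size, and zone are illustrative:

```hcl
# Hypothetical standalone disk resource; name, size, and zone are illustrative
resource "google_compute_disk" "datadisk1" {
  name = "example-datadisk1"
  type = "pd-standard"
  zone = "us-central1-a"
  size = 8

  lifecycle {
    # Ignore only the labels Lucidity manages on this specific disk
    ignore_changes = [
      labels["ManagedByLucidity"],
      labels["MarkedFor"],
    ]
  }
}
```

Scoping ignore_changes to the disk resource keeps Terraform fully in charge of any disks that are not onboarded to Lucidity.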
3. Removing Original Disks
After onboarding with Lucidity, retain the original disks temporarily as a backup. Eventually, remove them from the Terraform code to prevent drift in the infrastructure configuration. Consider the following when updating the code:
How disks and Compute Instances are defined (together or separately managed)
Usage of modules and variable files
Any hardcoded disk references in the Terraform code
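As one way to retire the original disks while keeping them temporarily as a backup, Terraform v1.7 and later offers removed blocks, which drop a resource from state without destroying the underlying disk. The following is a sketch; the resource address assumes the original disk was defined as google_compute_disk.datadisk1:

```hcl
# Remove the original disk from Terraform state without destroying it,
# leaving the disk in place as a temporary backup (requires Terraform >= 1.7)
removed {
  from = google_compute_disk.datadisk1

  lifecycle {
    destroy = false
  }
}
```

Once the backup window has passed, the disk can be deleted through the cloud console or CLI without causing Terraform drift.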
Code Walkthrough
To provide practical insight into the required Terraform code changes for Lucidity onboarding, here are a few common scenarios:
Scenario 1: Managed Disks Attached to Compute Instances
In this scenario, we will outline the process for setting up a Compute Instance with attached Managed Disks using Terraform, and demonstrate how this setup is modified after integrating Lucidity to enhance infrastructure management.
Before Lucidity Integration
Initially, the Compute Instance is configured with Managed Disks, with the basic setup ensuring disks are attached directly to the Compute Instance and tagged accordingly. The following Terraform configuration demonstrates this setup:
provider "google" {
  project = "your_project_id"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  labels = {
    "environment" = "production"
  }

  boot_disk {
    initialize_params {
      image = "your-sample_image"
      type  = "pd-ssd"
    }
  }

  attached_disk {
    source = google_compute_disk.datadisk1.id
  }
}

resource "google_compute_disk" "datadisk1" {
  name = "example-datadisk1"
  type = "pd-standard"
  zone = "us-central1-a"
  size = 8
}
In this configuration, any change to the Compute Instance or its disks would be directly managed by Terraform, including updates to labels and volume settings.
After Lucidity Integration
After integrating Lucidity, the following modifications need to be made to the Terraform configuration to delegate management of specific attributes, such as labels and lifecycle changes for the managed disks, to Lucidity. This ensures that Lucidity's automated processes can manage these aspects without Terraform attempting to revert them. Here is the configuration after the changes:
provider "google" {
  project = "your_project_id"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  labels = {
    "environment" = "production"
  }

  boot_disk {
    initialize_params {
      image = "your-sample_image"
      type  = "pd-ssd"
    }
  }

  attached_disk {
    source = google_compute_disk.datadisk1.id
  }

  lifecycle {
    ignore_changes = [
      labels["MarkedFor"],
      labels["ManagedByLucidity"],
    ]
  }
}

resource "google_compute_disk" "datadisk1" {
  name = "example-datadisk1"
  type = "pd-standard"
  zone = "us-central1-a"
  size = 8
}
Key Changes
The key change here is the lifecycle block using the ignore_changes attribute, which tells Terraform to ignore any changes to the labels managed by Lucidity.
Note
In this example, Terraform ignores the Lucidity labels at the Compute Instance level. If only some of the attached disks are onboarded to Lucidity, take this into account when structuring the lifecycle block.
Scenario 2: Externally Managed Disks
This scenario details configuring managed disks that are not directly attached during Compute Instance creation. Instead, they are managed separately, allowing flexibility for detaching or reattaching disks without affecting the Compute Instance lifecycle. After Lucidity integration, additional configuration is required to facilitate Lucidity's disk management.
Before Lucidity Integration
Initially, managed disks are defined as separate resources and attached to Compute Instances through attachment specifications in Terraform. Here's how you might define this in your Terraform script before integrating with Lucidity:
provider "google" {
  project = "your_project_id"
}

resource "google_compute_disk" "example_disk_1" {
  name  = "exampledisk1"
  type  = "pd-standard"
  zone  = "us-central1-a"
  size  = 100
  image = "your-sample_image"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  labels = {
    "environment" = "production"
  }

  boot_disk {
    initialize_params {
      image = "your-sample_image"
      type  = "pd-ssd"
    }
  }

  attached_disk {
    source      = google_compute_disk.example_disk_1.id
    device_name = "disk1"
    mode        = "READ_WRITE"
  }
}
After Lucidity Integration
After Lucidity is integrated, the following modifications need to be made to ensure that changes to the Lucidity-managed labels are ignored by Terraform. This prevents Terraform from attempting to manage or revert these properties, allowing Lucidity to handle them:
provider "google" {
  project = "your_project_id"
}

resource "google_compute_disk" "example_disk_1" {
  name  = "exampledisk1"
  type  = "pd-standard"
  zone  = "us-central1-a"
  size  = 100
  image = "your-sample_image"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  labels = {
    "environment" = "production"
  }

  boot_disk {
    initialize_params {
      image = "your-sample_image"
      type  = "pd-ssd"
    }
  }

  attached_disk {
    source      = google_compute_disk.example_disk_1.id
    device_name = "disk1"
    mode        = "READ_WRITE"
  }

  lifecycle {
    ignore_changes = [
      labels["MarkedFor"],
      labels["ManagedByLucidity"],
    ]
  }
}
State Management
Note
When managed disks and disk attachment blocks are no longer managed by Terraform, users must manually remove these resources from the Terraform configuration files. Additionally, users need to remove the corresponding state file data using Terraform commands to ensure proper state management.
terraform state rm google_compute_disk.example_disk_1
Note that inline attached_disk blocks have no separate state address; removing the block from the instance configuration is sufficient. If attachments are instead managed through separate google_compute_attached_disk resources, remove those from state as well.
As an alternative to manually editing the state file and Terraform configuration, users can use the following command to refresh the state based on the actual infrastructure, effectively accepting any changes made outside of Terraform after disks are removed:
terraform apply --refresh-only --auto-approve
Scenario 3: Modules and Variable Files for Disk Management
In this scenario, Terraform modules and variable files are used to create and manage disks independently of the Compute Instances. When Lucidity takes over certain aspects of disk management, adjustments are needed within these modules and variables to ensure compatibility.
Before Lucidity Integration
Initially, development teams use the modules by setting the required variables (for example, through defaults in variables.tf or assignments in terraform.tfvars files). An example of this setup might look like:
# In variables.tf file
variable "disk_details" {
  type = map(object({
    size = number
    type = string
  }))
  default = {
    "disk1" = {
      size = 8
      type = "pd-standard"
    }
  }
}
# In main.tf file
module "managed_disks" {
  source = "./modules/managed_disks"
  disks  = var.disk_details
}
After Lucidity Integration
After integrating with Lucidity, it's important to ensure the module is adjusted and the variable files are updated to reflect the changes. Assuming we are onboarding disk1, the code would look like:
# In variables.tf file
variable "disk_details" {
  type = map(object({
    size = number
    type = string
  }))
  default = {
    // disk1 removed - now managed by Lucidity
  }
}
# In main.tf file
module "managed_disks" {
  source = "./modules/managed_disks"
  disks  = var.disk_details
}
# Assuming the managed_disks module defines resources like this:
resource "google_compute_disk" "example" {
  for_each = var.disks
  name     = "managed-disk-${each.key}"
  zone     = var.zone
  type     = each.value.type
  size     = each.value.size
  labels = {
    "ManagedByLucidity" = "true"
  }
  lifecycle {
    ignore_changes = [
      labels["MarkedFor"],
      labels["ManagedByLucidity"],
      // Managed externally by Lucidity
    ]
  }
}
Key Operations
Modify the Variable File
Update the variable definitions (in variables.tf or terraform.tfvars) to match the resources and parameters still managed by Terraform.
Update the Module Configuration
If needed, add or modify lifecycle blocks within your modules to prevent Terraform from attempting to manage aspects now handled by Lucidity.
State Reconciliation
Use terraform apply --refresh-only --auto-approve to update the Terraform state if Lucidity has made changes to the resources.
These three scenarios illustrate how Terraform configurations adapt to Lucidity integration across various setups. Each scenario aims to maintain seamless infrastructure management while leveraging Lucidity's disk management capabilities.
Onboard Lucidity To New Mount Points
Overview
This section of the guide outlines the steps to onboard Lucidity onto new Compute Instances being deployed to the infrastructure using Terraform, or onto new partitions being added to existing Compute Instances using Terraform. Lucidity handles disk management on its end and hence requires updates to the Terraform configuration to prevent conflicts and manage disk lifecycles.
The process involves configuring the Terraform script to include necessary parameters and making an API call to Lucidity’s dashboard backend. This results in the creation of a new partition from an existing disk pool, which Lucidity then manages.
Steps To Set Up Lucidity On New Mount Points
1. New Compute Instance
Create a new Compute Instance using Terraform
Create a new Compute Instance using Terraform; however, do NOT add the disks you want to onboard to Lucidity yet.
Install Lucidity Agent
Install the Lucidity agent on the Compute Instance. This can be done manually, via the Lucidity dashboard, or in a scripted format, based on preference.
Modify Terraform Code to support Lucidity
Adjust the Terraform code to support Lucidity.
Onboard Disks via Terraform
Add the new partition using Terraform as described by the next Code Walkthrough section.
Verify Functionality
As a check, verify that the disk has been onboarded via the dashboard and by logging into the Compute Instance.
2. Existing Compute Instance
If the Compute Instance already has disks onboarded to Lucidity and you are looking to add more disks to the same Compute Instance:
Steps 1 - 3
Steps 1 to 3 (from the previous section) should already be completed, since the Compute Instance has already been onboarded; however, we recommend verifying this.
Onboard Disks via Terraform
Adjust the Terraform code to add the new partition as described by the next section.
Ensure disk configurations match existing Lucidity disks
When defining the new disk, its disk type, tags/labels, and host cache setting (where applicable) must be the same as the existing Lucidity disks.
Verify Functionality
As a check, verify that the disk has been onboarded via the dashboard and by logging into the Compute Instance.
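When the new disk is defined directly in Terraform, matching the existing Lucidity disks might look like the following sketch; the resource name, zone, and label values are illustrative assumptions:

```hcl
# Hypothetical second data disk; type, zone, and labels mirror the
# settings of the disks already onboarded to Lucidity
resource "google_compute_disk" "datadisk2" {
  name = "example-datadisk2"
  type = "pd-standard" # must match the existing Lucidity disks
  zone = "us-central1-a"
  labels = {
    "ManagedByLucidity" = "true"
  }
}
```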
Terraform Code Overview
Ensure that the Provider Block has been updated to support onboarding Lucidity (as defined in previous sections).
Ensure that the Lifecycle Block has been updated to support onboarding Lucidity (as defined in previous sections).
Since no original disks need to be removed, no changes are needed to the Terraform code for existing disks.
Users only need to modify their existing Terraform scripts to automate the partition creation and onboarding process for the new disk to be added. The script will include details such as the partition name, instance ID, disk type, and GCP-specific settings like disk interface options and replication type.
Here is a sample Terraform script tailored for GCP (Windows):
resource "null_resource" "create_new_mount_instance_gcp_windows" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["PowerShell", "-Command"]
    command     = <<EOT
$uri = "http://<dashboardurl>/api/v1/partition/create"
$headers = @{
  "Authorization" = "secretkey"
  "X-Authtype"    = "auth_key"
  "X-Tenant"      = "<tenantId>"
  "accept"        = "*/*"
  "Content-Type"  = "application/json"
  "access-id"     = "accesskey"
}
$body = @{
  "diskType"      = "pd-standard"
  "instance"      = "<instanceid>"
  "partition"     = "E"
  "tenant"        = "<tenantId>"
  "encryptionKey" = @{
    "kmsKeyName" = "projects/<project>/locations/<location>/keyRings/<keyRing>/cryptoKeys/<key>"
  }
} | ConvertTo-Json -Depth 10
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body
EOT
  }
  triggers = {
    always_run = timestamp()
  }
}
Here is a sample Terraform script tailored for GCP (Linux):
resource "null_resource" "create_new_mount_instance_gcp_linux" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
uri="http://<dashboardurl>/api/v1/partition/create"
headers=(
  -H "Authorization: secretkey"
  -H "X-Authtype: auth_key"
  -H "X-Tenant: <tenantId>"
  -H "accept: */*"
  -H "Content-Type: application/json"
  -H "access-id: accesskey"
)
body=$(cat <<EOF
{
  "diskType": "pd-standard",
  "instance": "<instanceid>",
  "partition": "E",
  "tenant": "<tenantId>",
  "encryptionKey": {
    "kmsKeyName": "projects/<project>/locations/<location>/keyRings/<keyRing>/cryptoKeys/<key>"
  }
}
EOF
)
# $$ escapes Terraform's template interpolation for bash's $${...} syntax
curl -X POST "$uri" "$${headers[@]}" -d "$body"
EOT
  }
  triggers = {
    always_run = timestamp()
  }
}
Parameters and Options
diskType
Users should specify the disk type based on their requirements. GCP options include pd-standard for cost-effective storage, pd-ssd for high performance, and pd-balanced for a balance between performance and cost.
instance, tenant, and partition
Mandatory fields to specify the exact resource being managed.
encryptionKey
This field refers to the customer-managed encryption key in Google Cloud. The kmsKeyName specifies the Google Cloud KMS key used to encrypt the disk partitions. Users must provide this if they have specific encryption requirements; otherwise, Google-managed keys will be used by default.
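Putting the parameters together, a minimal request body that relies on Google-managed encryption (i.e., omitting encryptionKey) might look like the following sketch; the placeholder values are illustrative:

```json
{
  "diskType": "pd-balanced",
  "instance": "<instanceid>",
  "partition": "E",
  "tenant": "<tenantId>"
}
```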
Integration Process
Upon executing the updated Terraform script, an API call is made to Lucidity’s backend, which handles the creation and immediate integration of the new partition. This ensures that the new partition is set up for automated management actions, such as capacity adjustments based on usage.