This document walks you through integrating Lucidity's block storage AutoScaler with your existing Terraform-managed infrastructure.
Prerequisites
Before beginning the integration process, ensure you have the following:
Necessary permissions to modify Terraform files and execute configurations.
Administrative access to your cloud environment.
Integration Overview
Integrating Lucidity with your Terraform setup allows you to manage your block storage more efficiently, offering features such as automated scaling based on disk utilization and enhanced monitoring. This document covers multiple scenarios, including managed disks attached directly to VMs and disks managed outside of them.
Scenario 1: Integrating Lucidity with Managed Disks Attached to VMs
Overview
In this scenario, we detail how to set up an Azure VM with attached Managed Disks using Terraform and how this setup is modified once Lucidity is integrated to manage the infrastructure more effectively.
Before Lucidity Integration
Initially, the Azure VM is configured with Managed Disks, with the basic setup ensuring disks are attached directly to the VM and tagged accordingly. The following Terraform configuration demonstrates this setup:
resource "azurerm_linux_virtual_machine" "example" {
  name                = "MyExampleVM"
  location            = "East US"
  resource_group_name = azurerm_resource_group.example.name
  size                = "Standard_DS1_v2"
  admin_username      = "adminuser"
  # admin_ssh_key or admin_password omitted for brevity
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  # Note: inline storage_data_disk blocks are only supported by the legacy
  # azurerm_virtual_machine resource; with azurerm_linux_virtual_machine,
  # data disks are usually defined as separate resources (see Scenario 2).
  storage_data_disk {
    name          = "example-datadisk1"
    create_option = "Empty"
    disk_size_gb  = 8
    lun           = 0
    caching       = "None"
  }
  storage_data_disk {
    name          = "example-datadisk2"
    create_option = "Empty"
    disk_size_gb  = 16
    lun           = 1
    caching       = "None"
  }
  tags = {
    Environment = "Production"
  }
}

In this configuration, any change to the VM or its disks is managed directly by Terraform, including updates to tags and volume settings.
After Lucidity Integration
After integrating Lucidity, the following modifications need to be made to the Terraform configuration to delegate management of specific attributes to Lucidity, such as tags and lifecycle changes for the managed disks. This ensures that Lucidity's automated processes can manage these aspects without Terraform attempting to revert them. Here is how the configuration changes:
resource "azurerm_linux_virtual_machine" "example" {
  name                = "MyExampleVM"
  location            = "East US"
  resource_group_name = azurerm_resource_group.example.name
  size                = "Standard_DS1_v2"
  admin_username      = "adminuser"
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  tags = {
    Environment = "Production"
  }
  lifecycle {
    # Since Terraform 0.12, ignore_changes entries are bare attribute
    # references, not quoted strings.
    ignore_changes = [
      storage_data_disk, // Lucidity manages the disk lifecycle
      tags,              // Tags are managed outside of Terraform
    ]
  }
}

provider "azurerm" {
  features {}
  ignore_tags {
    keys = ["ManagedByLucidity"]
  }
}

Key Changes:
Lifecycle Management: The ignore_changes attribute is added to the lifecycle block of the VM resource. This tells Terraform to ignore any changes to the managed disks and tags, which are managed by Lucidity.
Provider Configuration: The AzureRM provider configuration is enhanced with ignore_tags. This directs Terraform not to manage tags applied by Lucidity, preventing conflicts between Lucidity's changes and Terraform's state management.
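If you want Terraform to keep enforcing most tags and only cede the Lucidity-specific one, recent Terraform versions also accept individual map keys in ignore_changes. A minimal sketch, assuming the hypothetical tag key ManagedByLucidity:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ... VM configuration as above ...

  lifecycle {
    ignore_changes = [
      # Ignore only the Lucidity-owned tag instead of all tags, so
      # Terraform still enforces Environment and any other tags.
      tags["ManagedByLucidity"],
    ]
  }
}
```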
Scenario 2: Integrating Lucidity with Externally Managed Disks
Overview
In this scenario, we'll cover how to configure external managed disks (not directly attached upon instance creation) and the necessary modifications to manage these disks effectively with Lucidity after integration. Managing disks externally allows more flexibility in storage management and can help with configurations where disks may need to be detached or reattached without affecting the instance lifecycle.
Before Lucidity Integration
Initially, managed disks are defined as separate resources and attached to VMs through attachment specifications in Terraform. Here's how you might define this in your Terraform script before integrating with Lucidity:
resource "azurerm_managed_disk" "example_disk_0" {
  name                 = "exampleDisk0"
  location             = "East US"
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 50
}

resource "azurerm_virtual_machine_data_disk_attachment" "example_attach_0" {
  managed_disk_id    = azurerm_managed_disk.example_disk_0.id
  virtual_machine_id = azurerm_virtual_machine.example.id
  lun                = 0
  caching            = "ReadWrite"
}

resource "azurerm_managed_disk" "example_disk_1" {
  name                 = "exampleDisk1"
  location             = "East US"
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 100
}

resource "azurerm_virtual_machine_data_disk_attachment" "example_attach_1" {
  managed_disk_id    = azurerm_managed_disk.example_disk_1.id
  virtual_machine_id = azurerm_virtual_machine.example.id
  lun                = 1
  caching            = "ReadWrite"
}

After Lucidity Integration
After Lucidity is integrated, the following modifications need to be made so that Terraform ignores changes to the disks' tags. This prevents Terraform from attempting to manage or revert these properties, allowing Lucidity to handle them:
resource "azurerm_managed_disk" "example_disk_1" {
  name                 = "exampleDisk1"
  location             = "East US"
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 100
  tags = {
    ManagedByLucidity = "true"
  }
  lifecycle {
    ignore_changes = [
      tags, // Managed by Lucidity
    ]
  }
}

resource "azurerm_virtual_machine_data_disk_attachment" "example_attach_1" {
  managed_disk_id    = azurerm_managed_disk.example_disk_1.id
  virtual_machine_id = azurerm_virtual_machine.example.id
  lun                = 1
  caching            = "ReadWrite"
}

Key Changes:
Managed Disk Resource: Added a tags property to label the resource as managed by Lucidity and included a lifecycle block to ignore changes to tags.
Disk Attachment Resource: The azurerm_virtual_machine_data_disk_attachment resource does not expose tags (or volume_tags, which is AWS terminology), so no lifecycle block is needed there; the ignore_changes setting on the managed disk is sufficient to ensure that property management carried out externally by Lucidity is not interfered with by Terraform.
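If other parts of your configuration still need to reference a disk that Lucidity now manages, you can read it through a data source instead of a resource, so Terraform observes the disk without managing it. A sketch, reusing the disk name from this scenario:

```hcl
data "azurerm_managed_disk" "lucidity_managed" {
  name                = "exampleDisk1"
  resource_group_name = azurerm_resource_group.example.name
}

output "lucidity_disk_size_gb" {
  # Reflects the current size, including any resizes Lucidity performed
  value = data.azurerm_managed_disk.lucidity_managed.disk_size_gb
}
```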
Key Operations After Lucidity Integration for Scenario 2:
Adding the ignore_changes Block:
Users must manually add the ignore_changes directive to the Terraform configuration for each managed disk. This prevents Terraform from attempting to manage or revert tags and disk attributes that are handled by Lucidity or other external processes.
State Management:
Users need to manually remove the managed disk and disk attachment blocks from the Terraform configuration files when these resources are no longer managed by Terraform (perhaps because they are now managed by Lucidity).
Users must also remove the corresponding state file data using Terraform commands:
terraform state rm azurerm_managed_disk.example_disk_0
terraform state rm azurerm_virtual_machine_data_disk_attachment.example_attach_0

As an alternative to manually editing the state file and Terraform configuration, users can use the following command to refresh the state based on the actual infrastructure, effectively accepting any changes made outside of Terraform:
terraform apply --refresh-only --auto-approve
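On Terraform 1.7 and later, the manual terraform state rm step can also be expressed in configuration with a removed block, which drops the resources from state on the next apply without destroying them in Azure. A sketch using the resource addresses from this scenario:

```hcl
removed {
  from = azurerm_managed_disk.example_disk_0

  lifecycle {
    destroy = false # forget the disk in state; do not delete it in Azure
  }
}

removed {
  from = azurerm_virtual_machine_data_disk_attachment.example_attach_0

  lifecycle {
    destroy = false
  }
}
```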
Benefits Post-Integration
No Issues with Auto Scaling: Once these steps are completed, users should not encounter any issues when Lucidity’s AutoScaler performs scaling actions such as expanding or shrinking disks. The ignore_changes block will prevent Terraform from interfering with these dynamic changes.
Scenario 3: Managing Managed Disks with Modules and Variable Files
Overview
In this scenario, Terraform modules, coupled with variable files, are employed for creating and managing managed disks outside of VMs. When Lucidity is integrated, it takes over some management aspects, and adjustments need to be made to accommodate this change.
Before Lucidity Integration
Initially, development teams use the modules by setting the required variables in terraform.tfvars files. An example of this setup might look like:
module "managed_disks" {
  source = "./modules/managed_disks"
  disks  = var.disk_details
}

# In terraform.tfvars file
disk_details = {
  "disk0" = {
    size = 8
    type = "Standard_LRS"
  },
  "disk1" = {
    size = 16
    type = "Premium_LRS"
  }
}

After Lucidity Integration
After integrating with Lucidity, it's important to ensure the module is adjusted and variable files are updated to reflect the changes.
module "managed_disks" {
  source = "./modules/managed_disks"
  disks  = var.disk_details
}

# Updated terraform.tfvars file
disk_details = {
  "disk0" = {
    size_gb = 128 // Initial size; Lucidity may adjust it dynamically
    type    = "Standard_LRS"
  }
}

# Assuming the managed_disks module defines resources like this:
resource "azurerm_managed_disk" "example" {
  for_each             = var.disks
  name                 = "ManagedDisk-${each.key}"
  location             = "East US"
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = each.value.type
  create_option        = "Empty"
  disk_size_gb         = each.value.size_gb
  tags = {
    "ManagedBy" = "Lucidity"
  }
  lifecycle {
    ignore_changes = [
      disk_size_gb, // Size changes are managed by Lucidity
      tags,
    ]
  }
}

Key Changes
Variable File Adjustments: The terraform.tfvars file is modified to reflect only the resources that need to be managed by Terraform, aligning with Lucidity's scope of management.
Module Adaptation: Within the module, the lifecycle block is added to the Managed disk resource definitions to ignore changes in size and tags, as these might now be managed by Lucidity.
State Management: As Lucidity manages certain aspects of the disks, Terraform's state file may need to be updated accordingly to reflect the current infrastructure accurately.
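For completeness, the module's disks variable would need a type definition that matches the terraform.tfvars entries. A hypothetical variables.tf for the managed_disks module, assuming Terraform 1.3+ for optional():

```hcl
variable "disks" {
  description = "Managed disks keyed by logical name"
  type = map(object({
    size_gb = number
    type    = optional(string, "Standard_LRS") # default when omitted in tfvars
  }))
}
```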
Key Operations After Lucidity Integration
Once Lucidity is integrated:
Modify the Variable File:
Update terraform.tfvars to match the resources and parameters managed by Terraform.
Update the Module Configuration:
If needed, add or modify lifecycle blocks within your modules to prevent Terraform from attempting to manage aspects now handled by Lucidity.
State Reconciliation:
Use terraform apply --refresh-only --auto-approve to update the Terraform state if Lucidity has made changes to the resources.
Scenario 4: Creating and Onboarding New Partitions in Azure with Lucidity
Overview
In this scenario, Azure users can create new disk partitions and directly manage them through Lucidity using Terraform. The process involves configuring the Terraform script to include the necessary parameters and making an API call to Lucidity's dashboard backend. This results in the creation of a new partition from an existing disk pool, which Lucidity then manages.
Terraform Script Configuration
Users need to modify their existing Terraform scripts to automate the partition creation and onboarding process. The script will include details like the partition name, instance ID, disk type, and Azure-specific settings such as disk cache options.
Here is a sample Terraform script tailored for Azure (Windows):
resource "null_resource" "create_new_mount_instance_windows" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["PowerShell", "-Command"]
    command     = <<-EOT
      $uri = "http://<dashboardurl>/api/v1/partition/create"
      $headers = @{
        "Authorization" = "secretkey"
        "X-Authtype"    = "auth_key"
        "X-Tenant"      = "<tenantId>"
        "accept"        = "*/*"
        "Content-Type"  = "application/json"
        "access-id"     = "accesskey"
      }
      $body = @{
        "diskType"  = "Standard_LRS" # Azure-specific disk type
        "instance"  = "<instanceid>"
        "partition" = "E"            # New partition to be created
        "tenant"    = "<tenantId>"
        "diskCache" = "ReadWrite"    # Azure-specific cache setting
        "azureCmkDiskEncryptionSetId"    = "/subscriptions/<sub>/resourceGroups/<rgrp>/providers/Microsoft.Compute/diskEncryptionSets/<cmkId>"
        "azureCmkPmkDiskEncryptionSetId" = "/subscriptions/<sub>/resourceGroups/<rgrp>/providers/Microsoft.Compute/diskEncryptionSets/<pmkId>"
      } | ConvertTo-Json
      Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body
    EOT
  }
  triggers = {
    always_run = timestamp()
  }
}

Here is a sample Terraform script tailored for Azure (Linux):
resource "null_resource" "create_new_mount_instance_azure" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"] # the header array below requires bash
    command     = <<-EOT
      uri="http://<dashboardurl>/api/v1/partition/create"
      headers=(
        -H "Authorization: secretkey"
        -H "X-Authtype: auth_key"
        -H "X-Tenant: <tenantId>"
        -H "accept: */*"
        -H "Content-Type: application/json"
        -H "access-id: accesskey"
      )
      body=$(jq -n \
        --arg diskType "<diskType>" \
        --arg instance "<instanceid>" \
        --arg partition "J" \
        --arg tenant "<tenantId>" \
        --arg azureCmkDiskEncryptionSetId "/subscriptions/<sub>/resourceGroups/<rgrp>/providers/Microsoft.Compute/diskEncryptionSets/<cmkId>" \
        --arg azureCmkPmkDiskEncryptionSetId "/subscriptions/<sub>/resourceGroups/<rgrp>/providers/Microsoft.Compute/diskEncryptionSets/<pmkId>" \
        '{
          diskType: $diskType,
          instance: $instance,
          partition: $partition,
          tenant: $tenant,
          azureCmkDiskEncryptionSetId: $azureCmkDiskEncryptionSetId,
          azureCmkPmkDiskEncryptionSetId: $azureCmkPmkDiskEncryptionSetId
        }')
      # $$ escapes Terraform's own interpolation inside the heredoc
      curl -X POST "$${headers[@]}" -d "$body" "$uri"
    EOT
  }
  triggers = {
    always_run = timestamp()
  }
}

Parameters and Options:
diskType: Users should provide the disk type appropriate for their use case. Azure options include Premium_LRS, Standard_LRS, etc.
diskCache: For Azure, options like "None", "ReadOnly", and "ReadWrite" allow customization of how the disk interacts with cached data.
instance, tenant, and partition: Mandatory fields to specify the exact resource being managed.
azureCmkDiskEncryptionSetId and azureCmkPmkDiskEncryptionSetId: The IDs of the Azure disk encryption sets (customer-managed keys and platform-managed keys, respectively) used to encrypt the disk partitions. Provide these if you have specific encryption requirements; if left empty, default settings are applied.
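Rather than hard-coding secretkey and accesskey in the provisioner, you might inject them as sensitive Terraform variables. The variable names below are illustrative, not part of Lucidity's API:

```hcl
variable "lucidity_secret_key" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

variable "lucidity_access_id" {
  type      = string
  sensitive = true
}

# Inside the provisioner command, reference them as, for example:
#   -H "Authorization: ${var.lucidity_secret_key}"
#   -H "access-id: ${var.lucidity_access_id}"
```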
Integration Process:
Upon executing the updated Terraform script, an API call is made to Lucidity’s backend, which handles the creation and immediate integration of the new partition. This ensures that the new partition is set up for automated management actions, such as capacity adjustments based on usage.