HashiCorp Terraform AWS Provider Introduces Significant Changes to Amazon S3 Bucket Resource

HashiCorp has announced the release of version 4.0 of its Terraform AWS provider. This release introduces significant, breaking changes to the Amazon S3 bucket resource. The release also includes full lifecycle control over default resources, changes to the provider configuration, and improvements to the handling of plural data sources.

With this release, the aws_s3_bucket resource has been significantly refactored to slim down the overloaded top-level resource. Many of its arguments and attributes have been deprecated and transitioned to read-only, computed arguments, with updates now handled through new standalone aws_s3_bucket_* resources. Previously, configuring server-side encryption on an S3 bucket was handled this way:

resource "aws_s3_bucket" "example" {
  # ... other configuration ...
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
}

After upgrading to version 4.0, attempting to apply the above configuration will return an error stating that server_side_encryption_configuration is read-only. The configuration should be updated to use the new aws_s3_bucket_server_side_encryption_configuration resource as shown below:

resource "aws_s3_bucket" "example" {
  # ... other configuration ...
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.mykey.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

After updating to the new resources, HashiCorp recommends running terraform import on each altered resource to bring it under state management and prevent data loss.
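As a sketch, importing the standalone encryption resource from the example above would reference the existing bucket by name (here assumed, for illustration, to be example-bucket):

terraform import aws_s3_bucket_server_side_encryption_configuration.example example-bucket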

The update has not been fully welcomed by the community, as indicated by an issue opened against the GitHub repository. This is partially due to wording within the release post indicating that the "aws_s3_bucket will remain as is until the next major release (5.0) of the Terraform AWS provider". As clarified by Justin Rezolk, community manager at HashiCorp, version 4.0 moves the original, deprecated attributes to read-only until version 5.0, at which point they may be removed. He continues by stating that:

This was not reflected in the blog post about the release (something we're working to address), and we recognize that this doesn't necessarily reflect what "deprecated" means in the software world. The thought here is that this would not break configurations, but rather that there would be no drift detection for computed attributes.

Some users have noted that the recommended approach of updating to the new resources and running terraform import does not scale for organizations with large numbers of S3 buckets. User joe-a-t explains that "the issue is the scale of how many thousands times we would need to follow those instructions in literally hundreds of directories."

Based on feedback from the community, the Terraform AWS provider team will be exploring tooling that may help automate the bucket migration. In the interim, Rezolk strongly recommends pinning the provider to a version prior to 4.0.0 until the upgrade can be properly actioned.
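A minimal sketch of such a pin, assuming the provider is sourced from hashicorp/aws, constrains the version to the 3.x series:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Allow any 3.x release while excluding the 4.0 breaking changes
      version = "~> 3.0"
    }
  }
}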

The release also improves the handling of default resources within AWS, such as the default VPC per region or the default subnet per availability zone. AWS recently updated its APIs to enable the full CRUD lifecycle on these default resources, allowing the Terraform provider to be updated to support creating and destroying them.

Unlike typical Terraform resources, if the default VPC or subnet already exists within the targeted region or availability zone, Terraform will adopt it into management instead of creating it. Additionally, terraform destroy will not delete the default resources; it only removes them from the Terraform state. To actually delete the default VPC or subnets, force_destroy must be set to true.
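As an illustrative sketch, the following adopts a region's existing default VPC into state and permits Terraform to delete it on destroy (the tag is an assumption for the example):

resource "aws_default_vpc" "default" {
  # Adopts the region's existing default VPC rather than creating a new one;
  # force_destroy = true allows terraform destroy to actually delete it
  force_destroy = true

  tags = {
    Name = "Default VPC"
  }
}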

This release also updates plural data sources to better align with the provider design principles. With 4.0, all AWS provider plural data sources that are expected to return an array of results will now return an empty list if zero results are found, rather than raising an error, as shown in the sketch below.
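As a brief sketch using the aws_subnets plural data source (the tag filter here is hypothetical), a query with no matches now yields an empty ids list:

data "aws_subnets" "private" {
  filter {
    name   = "tag:Tier"
    values = ["private"]
  }
}

output "private_subnet_count" {
  # Evaluates to 0 rather than failing when no subnets match the filter
  value = length(data.aws_subnets.private.ids)
}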

There are a number of other updates to the provider configuration, including support for automatic resolution of FIPS endpoints. Previously it was necessary to specify each required FIPS endpoint individually. While that is still supported, the provider can now be configured to resolve FIPS endpoints automatically where they are available:

provider "aws" { 
  use_fips_endpoint = true 
}

More details about the release can be found in the upgrade guide and the changelog. All HashiCorp Learn content that includes S3 bucket management will also be updated to include the new resources.
