Gruntwork release 2019-03
This page lists all the updates to the Gruntwork Infrastructure as Code
Library that were released in 2019-03. For instructions
on how to use these updates in your code, check out the updating
documentation.
Here are the repos that were updated:
Published: 3/27/2019 | Modules affected: ec2-backup | Release notes
- This module has been updated to use the Node.js 8.10 runtime, as Node.js 6.10 was deprecated in AWS Lambda.
Published: 3/15/2019 | Modules affected: jenkins-server | Release notes
- Update the version of the `server-group` module for Jenkins to `v0.6.25`.
Published: 3/18/2019 | Modules affected: ecs-service, ecs-service-with-alb | Release notes
- `ecs-service` [BREAKING]
- `ecs-service-with-alb` [BREAKING]
This release introduces support for AWS provider version 2.X:
- Fix deprecated usage of `placement_strategy` and replace it with `ordered_placement_strategy`.
This change is backwards incompatible on certain versions of the AWS provider. Specifically:
- `ecs-service` and `ecs-service-with-alb` are no longer compatible with AWS provider versions `<1.17.0`.
- `ecs-service` and `ecs-service-with-alb` will recreate the `ecs_service` resource (delete + create) on AWS provider versions `<2.1.0`.
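For reference, this is the provider-side change the modules now track: the `aws_ecs_service` resource takes repeatable `ordered_placement_strategy` blocks (applied in declaration order) instead of the deprecated `placement_strategy`. A minimal sketch (the cluster and task definition names are illustrative):

```hcl
resource "aws_ecs_service" "example" {
  name            = "example-service"
  cluster         = "example-cluster"    # illustrative
  task_definition = "example-task:1"     # illustrative
  desired_count   = 2

  # Spread tasks across availability zones first...
  ordered_placement_strategy {
    type  = "spread"
    field = "attribute:ecs.availability-zone"
  }

  # ...then bin-pack by memory within each zone.
  ordered_placement_strategy {
    type  = "binpack"
    field = "memory"
  }
}
```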
Special thanks to @fieldawarepiotr for contributions to help implement the changes in this release.
Published: 3/29/2019 | Modules affected: eks-cluster-workers, eks-scripts, eks-k8s-role-mapping | Release notes
This release introduces `eks-scripts`, a new module that contains helper scripts for working with EKS. The release ships with `map-ec2-tags-to-node-labels`, a Python script that runs on an EC2 instance acting as an EKS worker, pulls in the tags associated with that instance, and maps them to Kubernetes node labels. You can then pass the output to the bootstrap script to set the labels in Kubernetes.
Take a look at the `eks-cluster-with-supporting-services` example for sample usage.
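A minimal sketch of wiring the script into worker user data. This assumes the script is already installed on the AMI, and its exact invocation and output format may differ from what is shown here; the resource and variable names are illustrative:

```hcl
resource "aws_launch_configuration" "eks_worker" {
  name_prefix   = "eks-worker-"
  image_id      = "${var.eks_worker_ami_id}"   # illustrative variable
  instance_type = "t3.medium"

  user_data = <<-EOF
    #!/bin/bash
    # Map this instance's EC2 tags to Kubernetes node labels, then pass
    # them to the standard EKS bootstrap script via kubelet args.
    NODE_LABELS=$(map-ec2-tags-to-node-labels)
    /etc/eks/bootstrap.sh my-eks-cluster \
      --kubelet-extra-args "--node-labels=$NODE_LABELS"
  EOF

  lifecycle {
    create_before_destroy = true
  }
}
```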
Additionally, this release introduces a few bug fixes for working with multiple ASG worker pools:
- `eks-cluster-workers` now takes in a `name_prefix` variable that can be used to prefix the names of the resources it creates. Previously, all the resources were named after the EKS cluster name, which led to resource conflicts when there were multiple instances of the module.
- `eks-k8s-role-mapping` previously assumed there was only one worker IAM role, but when there are multiple worker pools, there can be multiple worker IAM roles. This release fixes that by expecting a list for the worker IAM role name input.
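Putting both fixes together, two worker pools on one cluster might look like the following sketch. The module source paths, output names, and the role-mapping input name are assumptions for illustration:

```hcl
module "workers_stateless" {
  source       = "./modules/eks-cluster-workers"   # illustrative path
  cluster_name = "my-eks-cluster"
  name_prefix  = "stateless-"   # avoids resource-name conflicts between pools
}

module "workers_stateful" {
  source       = "./modules/eks-cluster-workers"
  cluster_name = "my-eks-cluster"
  name_prefix  = "stateful-"
}

module "role_mapping" {
  source = "./modules/eks-k8s-role-mapping"

  # The worker IAM role name input now expects a list, one entry per pool.
  eks_worker_iam_role_names = [   # hypothetical input name
    "${module.workers_stateless.eks_worker_iam_role_name}",
    "${module.workers_stateful.eks_worker_iam_role_name}",
  ]
}
```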
Published: 3/5/2019 | Modules affected: eks-k8s-role-mapping, eks-cluster-workers, eks-cluster-control-plane | Release notes
This release does not introduce any changes to the underlying module features. Instead, this release focuses on documentation, examples, and test stability:
- Includes various documentation fixes, updating links after the repository split.
- Includes test stability improvements.
- Updated examples to split out a minimal EKS cluster from one that demonstrates the IAM roles.
- Includes Python code formatting for `eks-k8s-role-mapping`.
Published: 3/6/2019 | Modules affected: kinesis | Release notes
The `kinesis` module now supports server-side encryption.
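A minimal sketch of enabling it, assuming the module exposes inputs mirroring the `aws_kinesis_stream` encryption settings (the source path and input names are illustrative):

```hcl
module "kinesis" {
  source = "./modules/kinesis"   # illustrative path

  name        = "example-stream"
  shard_count = 1

  # Hypothetical inputs for server-side encryption: encrypt records at
  # rest with the AWS-managed Kinesis KMS key.
  encryption_type = "KMS"
  kms_key_id      = "alias/aws/kinesis"
}
```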
Published: 3/8/2019 | Modules affected: s3-cloudfront | Release notes
- Fix compatibility issues with AWS provider 2.0.0 
Published: 3/19/2019 | Modules affected: vpc-app, vpc-peering | Release notes
- You can now customize the CIDR block calculations for each "tier" of subnet in the `vpc-app` module using the `public_subnet_bits`, `private_subnet_bits`, and `persistence_subnet_bits` input variables, each of which specifies the number of bits to add to the CIDR prefix when calculating subnet ranges.
- You can now enable public IPs by default on public subnets in the `vpc-app` module by setting the `map_public_ip_on_launch` input variable to `true`.
- You can now configure the VPC peering connection using the new `allow_remote_vpc_dns_resolution`, `allow_classic_link_to_remote_vpc`, and `allow_vpc_to_remote_classic_link` input variables in the `vpc-peering` module.
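The new `vpc-app` inputs can be sketched as follows; the module source path and the other inputs shown are illustrative, and your module version may require additional required variables:

```hcl
module "vpc_app" {
  source = "./modules/vpc-app"   # illustrative path

  vpc_name   = "example-app-vpc"
  cidr_block = "10.0.0.0/16"

  # Bits added to the VPC prefix per tier: with a /16 VPC, adding 8 bits
  # yields /24 subnets for that tier.
  public_subnet_bits      = 8
  private_subnet_bits     = 8
  persistence_subnet_bits = 8

  # Assign public IPs by default to instances launched in public subnets.
  map_public_ip_on_launch = true
}
```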