Finally, clean up the cluster setup if you wish, using the following commands. This deletes the entire cluster and the other cloud resources that were provisioned for the DIGIT setup.
cd DIGIT-DevOps/infra-as-code/terraform/my-digit-eks
terraform destroy
Conclusion
We have successfully created the infrastructure on the cloud and deployed DIGIT in the cluster.
On AWS
Pre-read:
Prerequisites
You will get a Secret Access Key and Access Key ID. Save them.
Open the terminal and run the following command. You have already installed the AWS CLI and saved the credentials, so provide them when prompted. (You can leave the region and output format blank.)
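The command in question is the standard AWS CLI configuration command; the prompts look like this (values shown are placeholders):
aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]:
Default output format [None]: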
The above command creates the following file on your machine at ~/.aws/credentials.
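Its contents look like this (keys shown as placeholders):
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>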
Before we provision cloud resources, we must understand what resources need to be provisioned by Terraform to deploy DIGIT.
The following picture shows the key components (EKS, worker nodes, PostgreSQL DB, EBS volumes, load balancer).
EKS Architecture for DIGIT Setup
Considering the above deployment architecture, the following is the resource graph that we provision using Terraform in a standardised way, so that every environment gets the same infra every time.
EKS control plane (Kubernetes master)
Worker node group (VMs with the estimated number of vCPUs and memory)
EBS volumes (Persistent volumes)
RDS (PostgreSQL)
VPCs (Private network)
IAM users for access: admin, deployer, and read-only
Understanding the resource graph in the Terraform script:
Here, we have already written the Terraform script that provisions production-grade DIGIT infra; it can be customised with the configuration specified below.
Example:
VPC Resources:
VPC
Subnets
Internet Gateway
Route Table
EKS Cluster Resources:
IAM Role to allow EKS service to manage other AWS services.
EC2 Security Group to allow networking traffic with the EKS cluster.
EKS Cluster.
EKS Worker Nodes Resources:
IAM role allowing Kubernetes actions to access other AWS services.
EC2 Security Group to allow networking traffic.
Data source to fetch the latest EKS worker AMI (see the sketch after this list).
AutoScaling Launch Configuration to configure worker instances.
AutoScaling Group to launch worker instances.
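As an indicative sketch, the AMI data source mentioned above commonly looks like this (the name filter tracks your Kubernetes version, and the owner ID is the Amazon account that publishes EKS worker AMIs; verify both for your region and version):
data "aws_ami" "eks_worker" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-1.18-*"] # match var.kubernetes_version
  }
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI account ID
}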
Database
The configuration in this directory creates a set of RDS resources, including the DB instance, DB subnet group, and DB parameter group.
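A minimal sketch of the DB instance piece, with illustrative names and sizes (not the repo's actual values):
resource "aws_db_instance" "digit_db" {
  identifier          = "digit-postgres"  # illustrative name
  engine              = "postgres"
  engine_version      = "11.8"            # pick a supported version
  instance_class      = "db.t3.medium"    # size as per your estimate
  allocated_storage   = 20                # GB
  username            = "digit"
  password            = var.db_password   # prompted at apply time
  skip_final_snapshot = true
}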
Storage Module
The configuration in this directory creates the EBS volumes and attaches them to the cluster as persistent storage.
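An illustrative sketch of a single such volume (the zone, size, and name are placeholders; one volume is created per stateful service listed later):
resource "aws_ebs_volume" "kafka" {
  availability_zone = "ap-south-1b" # must match the worker nodes' zone
  size              = 100           # GB, as per your estimate
  type              = "gp2"

  tags = {
    Name = "kafka"
  }
}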
The following main.tf creates an S3 bucket to store the Terraform state of every execution, and a DynamoDB table to lock the state so that concurrent runs do not conflict.
provider "aws" {
region = "ap-south-1"
}
#This is a bucket name that you can name as you wish
resource "aws_s3_bucket" "terraform_state" {
bucket = "try-workshop-yourname"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
}
# This is the DynamoDB table for state locking; name it as you wish
resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "try-workshop-yourname"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
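For these resources to be used, the main Terraform configuration points its backend at them. A minimal sketch of such a backend block (the key path here is an assumption; match it to the repo's remote-state setup):
terraform {
  backend "s3" {
    bucket         = "try-workshop-yourname"
    key            = "terraform/state"    # assumed path
    region         = "ap-south-1"
    dynamodb_table = "try-workshop-yourname"
  }
}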
The following main.tf contains the detailed resource definitions that need to be provisioned.
You can define your configurations in variables.tf and provide the env-specific cloud requirements there, so that the same Terraform template can be customised for each environment.
## Add Cluster Name
variable "cluster_name" {
  default = "<Desired Cluster name>" # eg: my-digit-eks
}

## Add vpc_cidr_block
variable "vpc_cidr_block" {
  default = "CIDR"
}

# If you want a prod-grade network, you can define HA/DR with multiple zones
variable "network_availability_zones" {
  default = ["ap-south-1b", "ap-south-1a"]
}

# Which zone the nodes and volumes live in; it matters
variable "availability_zones" {
  default = ["ap-south-1b"]
}

variable "kubernetes_version" {
  default = "1.18"
}

# Instance type for your worker nodes; e.g. r5a.large has 2 vCPUs and 16 GB RAM
variable "instance_type" {
  default = "r5a.large"
}

# Spot instance configuration
variable "override_instance_types" {
  default = ["r5a.large", "r5ad.large", "r5d.large", "t3a.xlarge"]
}

# Number of machines as per the estimate
variable "number_of_worker_nodes" {
  default = "3"
}

## Add an ssh key in case you want to ssh to the nodes
variable "ssh_key_name" {
  default = "ssh key name"
}

# Keybase user for the terraform-created IAM users; create your own, see below
variable "iam_keybase_user" {
  default = "keybase:egovterraform"
}

# You will be prompted to provide this during execution
variable "db_password" {}
Important: Create your keybase key before you run Terraform.
Use https://keybase.io/ to create your PGP key; this generates both the public and private keys on your machine. Upload the public key to the keybase account you have created, give it a name, and make sure you reference it in your Terraform (the iam_keybase_user variable above). This allows sensitive information to be encrypted with it.
You can use this portal to decrypt your secret key. To decrypt a PGP message, upload the PGP message, PGP private key, and passphrase.
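If you have the keybase client installed and are signed in as that user, you can also decrypt from the terminal. A hypothetical example, assuming the encrypted secret is exposed as a Terraform output (substitute the actual output name shown by terraform output):
terraform output <encrypted-output-name> | base64 --decode | keybase pgp decrypt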
Run Terraform
Now that we know what the Terraform script does, the resource graph it provisions, and the custom values to supply for your environment, let us run the Terraform scripts to provision the infra required to deploy DIGIT on AWS.
First, cd into each of the following directories in turn, run the commands one by one, and watch the output closely.
cd DIGIT-DevOps/infra-as-code/terraform/sample-aws/remote-state
terraform init
terraform plan
terraform apply
cd DIGIT-DevOps/infra-as-code/terraform/sample-aws
terraform init
terraform plan
terraform apply
Upon successful execution, the following resources get created, which can be verified by the command "terraform output".
s3 bucket: to store terraform state.
Network: VPC, security groups.
IAM users auth: admin, deployer, and read-only users, created using keybase (see the keybase instructions above for creating the PGP key and decrypting the generated secrets).
EKS cluster: with master(s) & worker node(s).
Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
Use the following commands to get the kubeconfig from EKS and connect to the cluster from your local machine. This enables you to deploy DIGIT services to the cluster.
aws sts get-caller-identity
# Run the below command and give the respective region-code and the cluster name
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
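For example, with the sample values used earlier in this guide:
aws eks --region ap-south-1 update-kubeconfig --name my-digit-eks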
Finally, verify that you can connect to the cluster by running the following command.
kubectl config use-context <your cluster name>
kubectl get nodes
NAME                                          STATUS   AGE   VERSION               OS-IMAGE
ip-192-168-xx-1.ap-south-1.compute.internal   Ready    45d   v1.15.10-eks-bac369   Amazon Linux 2
ip-192-168-xx-2.ap-south-1.compute.internal   Ready    45d   v1.15.10-eks-bac369   Amazon Linux 2
ip-192-168-xx-3.ap-south-1.compute.internal   Ready    45d   v1.15.10-eks-bac369   Amazon Linux 2
ip-192-168-xx-4.ap-south-1.compute.internal   Ready    45d   v1.15.10-eks-bac369   Amazon Linux 2