aws-terraform

EKS


Table of Contents (EKS and/or AWS RHEL 7.7/CentOS 7.7 farm with disks, with Terraform, in any region)

  1. EKS TL;DR

    Topology

    Modules and Providers

  2. EKS and/or AWS bastion user-data with Terraform - RHEL 7.7 and CentOS 7.7 in all regions, with disks and tools
  3. Login
  4. Automatic provisioning
  5. Create an HA k8s Cluster as IaaS
  6. Reporting bugs
  7. Patches and pull requests
  8. License
  9. Code of conduct

EKS TL;DR

:beginner: Plan:

terraform init && terraform plan -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes -out "run.plan"

:beginner: Apply:

terraform apply "run.plan"

:beginner: Stack deploy with AWS ingress controller, EFK, prometheus-operator, consul-server/ui:

export KUBECONFIG=~/aws-terraform/kubeconfig_test-eks && ./deploystack.sh && cd helm && terraform init && terraform plan -out helm.plan && terraform apply helm.plan && kubectl apply -f kubernetes-manifests.yaml && kubectl apply -f all-in-one.yaml

:beginner: Destroy stack:

export KUBECONFIG=~/aws-terraform/kubeconfig_test-eks && kubectl delete -f kubernetes-manifests.yaml && kubectl delete -f all-in-one.yaml && terraform destroy --auto-approve

:beginner: Destroy cluster and other aws resources:

terraform destroy -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes --auto-approve

Topology

Modules and Providers

modules

cloudposse/ecr/aws 0.19.0 for ecr

eks-cluster.node_groups in .terraform/modules/eks-cluster/terraform-aws-eks-12.1.0/modules/node_groups. Instance templates are used from .terraform/modules/eks-cluster/terraform-aws-eks-12.1.0.

provider plugins
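
To inspect which provider plugins and module versions Terraform has resolved for this configuration, one can run the following after terraform init (the shape of the output tree varies by Terraform version):

terraform providers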

EKS and/or AWS bastion user-data with Terraform - RHEL 7.7 and CentOS 7.7 in all regions, with disks and tools

  1. Download and Install Terraform
  2. Create a new key pair via the EC2 console for your account and region (us-east-2 default) and use the corresponding Key pair name value from the console as the key_name value in variable.tf when performing terraform plan -out "run.plan". Please keep your private pem file handy and note its path. One can also create a separate certificate from the private key, to be used with the ELB secure port, as follows: openssl req -new -x509 -key privkey.pem -out certname.pem -days 3650 (see the CLI sketch after this list).
  3. Collect your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

You can generate new ones from your EC2 console via the url for your <<account_user>> - https://console.aws.amazon.com/iam/home?region=us-east-2#/users/<<account_user>>?section=security_credentials.
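
If you prefer the AWS CLI to the console, a minimal sketch of creating the key pair and the ELB certificate (the key and file names such as testdwai are placeholders; adjust --region to yours):

# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name testdwai --region us-east-2 --query 'KeyMaterial' --output text > testdwai.pem
chmod 400 testdwai.pem

# Self-signed certificate for the ELB secure port, valid 10 years
openssl req -new -x509 -key testdwai.pem -out testdwaicert.pem -days 3650 -subj "/CN=testdwai"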

  1. The ingress rule allows all traffic, so that remote-exec can run agentless via SSH from the local project to the target server in the region you pass explicitly as a parameter (us-east-1 or any other). Please make sure to have the private key created, or the public key imported, as a key pair for the passed region.
  2. git clone https://github.com/dwaiba/aws-terraform && cd aws-terraform && terraform init && terraform plan -out "run.plan" && terraform apply "run.plan".

Post provisioning, curl http://169.254.169.254/latest/user-data | sudo sh - runs automatically via terraform remote-exec and executes the contents of this repo's prep-centos7.txt shell script, which is made available as user-data. Besides shell scripts, various other types, including direct cloud-init commands, may be passed as multipart user-data via terraform remote-exec.
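
For illustration, a minimal multipart user-data payload mixing cloud-config directives with a shell script could look like the following (a hedged sketch; the boundary string, file name, and script body are arbitrary placeholders):

# Write a multipart MIME user-data file (cloud-config part + shell-script part)
cat > user-data.mime <<'EOF'
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
package_update: true

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo provisioned > /var/tmp/provisioned.marker
--==BOUNDARY==--
EOF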

  1. To destroy: terraform destroy

AWS RHEL 7.7 AMIs per region as per aws ec2 describe-images --owners 309956199498 --query 'Images[*].[CreationDate,Name,ImageId,OwnerId]' --filters "Name=name,Values=RHEL-7.7?*GA*" --region <<region-name>> --output table | sort -r - Red Hat Soln. #15356

AWS CentOS 7.7 AMIs per region as per aws ec2 describe-images --query 'Images[*].[CreationDate,Name,ImageId,OwnerId]' --filters "Name=name,Values=CentOS*7.7*x86_64*" --region <<region-name>> --output table | sort -r

AWS CentOS AMIs per region used in the map are as per the maintained CentOS Wiki.

Login

As per the Output instructions for each DNS output.

chmod 400 <<your private pem file>>.pem && ssh -i <<your private pem file>>.pem <<ec2-user or centos as per distro>>@<<public address>>

:high_brightness: Automatic Provisioning

https://github.com/dwaiba/aws-terraform

:beginner: Pre-req:

  1. A private pem file per region, available locally with chmod 400 applied.
  2. An AWS Access Key ID and Secret Access Key should be available for the AWS account.

You can generate new ones from your EC2 console via the url for your <<account_user>> - https://console.aws.amazon.com/iam/home?region=us-east-2#/users/<<account_user>>?section=security_credentials.
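
Rather than passing secrets as -var flags (which land in shell history), Terraform also reads any variable from a TF_VAR_-prefixed environment variable; assuming this repo's variable names aws_access_key and aws_secret_key, a minimal sketch:

# Picked up by Terraform as var.aws_access_key / var.aws_secret_key
export TF_VAR_aws_access_key="<<ACCESS KEY>>"
export TF_VAR_aws_secret_key="<<SECRET KEY>>"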

:beginner: Plan:

terraform init && terraform plan -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes -out "run.plan"

:beginner: Apply:

terraform apply "run.plan"

:beginner: Destroy:

terraform destroy -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes --auto-approve

Create an HA k8s Cluster as IaaS

curl -sLSf https://get.k3sup.dev | sh && sudo install -m 755 k3sup /usr/local/bin/

One can now use k3sup

  1. Obtain the Public IPs of the running instances with aws ec2 describe-instances, or just the Public IPs with aws ec2 describe-instances --query "Reservations[*].Instances[*].PublicIpAddress" --output=text

  2. One can use it to create a cluster with the first IP as master: k3sup install --cluster --ip <<Any of the Public IPs>> --user <<ec2-user or centos as per distro>> --ssh-key <<the location of the aws private key like ~/aws-terraform/yourpemkey.pem>>

  3. One can also join another IP as master or node. For master: k3sup join --server --ip <<Any of the other Public IPs>> --user <<ec2-user or centos as per distro>> --ssh-key <<the location of the aws private key like ~/aws-terraform/yourpemkey.pem>> --server-ip <<The Server Public IP>>

or as a simple script:


# Take the first instance IP as the k3s server
export SERVER_IP=$(terraform output -json instance_ips|jq -r '.[]'|head -n 1)

# Install k3s on the server node (no traefik, docker runtime)
k3sup install --cluster --ip $SERVER_IP --user ec2-user --ssh-key 'Your Private SSH Key Location' --k3s-extra-args '--no-deploy traefik --docker'

# Join every remaining instance to the cluster as a node
terraform output -json instance_ips|jq -r '.[]'|tail -n+2|xargs -I {} k3sup join --server-ip $SERVER_IP --ip {} --user ec2-user --ssh-key 'Your Private SSH Key Location' --k3s-extra-args --docker

export KUBECONFIG=`pwd`/kubeconfig
kubectl get nodes -o wide -w

kubeadm init

One can now use Weave Net and join other workers (see the join sketch after the block below).


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
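
To join workers after kubeadm init, a minimal sketch (assuming kubeadm is installed on each worker; run the first command on the master, which prints the exact join command including the token and CA hash):

# On the master: print the join command for workers
kubeadm token create --print-join-command

# On each worker: run the printed command, of the form
# sudo kubeadm join <<master ip>>:6443 --token <<token>> --discovery-token-ca-cert-hash sha256:<<hash>>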

Reporting bugs

Please report bugs by opening an issue in the GitHub Issue Tracker. Bugs have an auto template defined; please view it here.

Patches and pull requests

Patches can be submitted as GitHub pull requests. If using GitHub please make sure your branch applies to the current master as a ‘fast forward’ merge (i.e. without creating a merge commit). Use the git rebase command to update your branch to the current master if necessary.
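
For example, assuming origin points at the upstream repository:

# Rebase your branch onto the current master so it applies as a fast forward
git fetch origin
git rebase origin/master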

License

Code of Conduct