## EKS
:beginner: Plan:
```sh
terraform init && terraform plan -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes -out "run.plan"
```
:beginner: Apply:
```sh
terraform apply "run.plan"
```
:beginner: Stack deploy with AWS ingress controller, EFK, prometheus-operator, consul-server/ui:
```sh
export KUBECONFIG=~/aws-terraform/kubeconfig_test-eks && ./deploystack.sh && cd helm && terraform init && terraform plan -out helm.plan && terraform apply helm.plan && kubectl apply -f kubernetes-manifests.yaml && kubectl apply -f all-in-one.yaml
```
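Once the Helm plan and manifests are applied, a quick sanity check (generic kubectl, not specific to this stack) confirms everything is coming up:

```sh
# Watch pods across all namespaces until the ingress controller, EFK,
# prometheus-operator, and consul pods reach Running
kubectl get pods --all-namespaces -w
```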
:beginner: Destroy stack:
```sh
export KUBECONFIG=~/aws-terraform/kubeconfig_test-eks && kubectl delete -f kubernetes-manifests.yaml && kubectl delete -f all-in-one.yaml && terraform destroy --auto-approve
```
:beginner: Destroy cluster and other AWS resources:

```sh
terraform destroy -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes --auto-approve
```
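As a convenience sketch, the repeated `-var` flags above can be collected once in a tfvars file; the variable names are taken from the commands above, while the file name `run.tfvars` is hypothetical:

```sh
cat > run.tfvars <<'EOF'
aws_access_key   = "<<ACCESS KEY>>"
aws_secret_key   = "<<SECRET KEY>>"
count_vms        = 0
disk_sizegb      = 30
distro           = "centos7"
key_name         = "testdwai"
elbcertpath      = "~/Downloads/testdwaicert.pem"
private_key_path = "~/Downloads/testdwai.pem"
region           = "us-east-1"
tag_prefix       = "k8snodes"
EOF
# The plan and destroy commands then shrink to:
terraform plan -var-file=run.tfvars -out "run.plan"
terraform destroy -var-file=run.tfvars --auto-approve
```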
### Modules and provider plugins

- cloudposse/ecr/aws 0.19.0 for ECR
- eks-cluster.node_groups in `.terraform/modules/eks-cluster/terraform-aws-eks-12.1.0/modules/node_groups`
- Instance templates are used from `.terraform/modules/eks-cluster/terraform-aws-eks-12.1.0`
### Key pair name

Use an existing key pair name from the EC2 console as the value for `key_name` in `variable.tf` when performing `terraform plan -out "run.plan"`. Please keep your private pem file handy and note its path. One can also create a separate certificate from the private key, to be used with the ELB secure port, as follows:

```sh
openssl req -new -x509 -key privkey.pem -out certname.pem -days 3650
```
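To sanity-check the generated certificate, a standard openssl inspection (not specific to this repo) prints its subject and validity window:

```sh
# Confirm the cert carries the expected subject and 3650-day validity
openssl x509 -in certname.pem -noout -subject -dates
```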
AWS credentials are required as `AWS_ACCESS_KEY_ID="<< >>"` and `AWS_SECRET_ACCESS_KEY="<< >>"`. You can generate new ones from your AWS console via the URL for your `<<account_user>>`: https://console.aws.amazon.com/iam/home?region=us-east-2#/users/<<account_user>>?section=security_credentials
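Alternatively, a sketch using the standard credential environment variables (the Terraform AWS provider and the AWS CLI both honor these when no static credentials are configured):

```sh
# Standard AWS credential environment variables
export AWS_ACCESS_KEY_ID="<<ACCESS KEY>>"
export AWS_SECRET_ACCESS_KEY="<<SECRET KEY>>"
```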
### remote-exec

Runs agentless via SSH, locally from the project to the target server, using the key from the EC2 console for the region (us-east-1, or any other region that you pass explicitly as a parameter). Please make sure to have the private key created, or the public key imported, as a security key for the passed region:

```sh
git clone https://github.com/dwaiba/aws-terraform && cd aws-terraform && terraform init && terraform plan -out "run.plan" && terraform apply "run.plan"
```
- Post provisioning, `curl http://169.254.169.254/latest/user-data | sudo sh` runs automatically via terraform `remote-exec` and executes the `prep-centos7.txt` shell-script file contents of this repo, available as user-data. Various types besides shell-script, including direct cloud-init commands, may be passed as multipart as part of the user-data via terraform `remote-exec` (see the cloud-init sketch after this list).
- To destroy: `terraform destroy`
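For illustration, a minimal cloud-init user-data payload might look like the following; the file name and package list are hypothetical, not part of this repo:

```sh
# Hypothetical cloud-config user-data; cloud-init executes it on first boot
cat > user-data.yaml <<'EOF'
#cloud-config
package_update: true
packages:
  - git
runcmd:
  - echo "provisioned by cloud-init" > /etc/motd
EOF
```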
AWS RHEL 7.7 AMIs per region, as per Red Hat Solution #15356:

```sh
aws ec2 describe-images --owners 309956199498 --query 'Images[*].[CreationDate,Name,ImageId,OwnerId]' --filters "Name=name,Values=RHEL-7.7?*GA*" --region <<region-name>> --output table | sort -r
```
AWS CentOS 7.7 AMIs per region, as per:

```sh
aws ec2 describe-images --query 'Images[*].[CreationDate,Name,ImageId,OwnerId]' --filters "Name=name,Values=CentOS*7.7*x86_64*" --region <<region-name>> --output table | sort -r
```

AWS CentOS AMIs per region used in the map are as per the maintained CentOS Wiki.
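As a convenience sketch reusing the RHEL filters above, a JMESPath `sort_by` can print just the newest matching AMI ID:

```sh
# Sort matching images by creation date and emit only the most recent ImageId
aws ec2 describe-images --owners 309956199498 \
  --filters "Name=name,Values=RHEL-7.7?*GA*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --region us-east-1 --output text
```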
Connect as per the output instructions for each DNS output (user is `ec2-user` or `centos` as per distro):

```sh
chmod 400 <<your private pem file>>.pem && ssh -i <<your private pem file>>.pem ec2-user/centos@<<public address>>
```
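Optionally, a hypothetical `~/.ssh/config` entry (the `k8snode` alias and values below are placeholders) saves retyping the key and user:

```sh
cat >> ~/.ssh/config <<'EOF'
Host k8snode
    HostName <<public address>>
    User centos
    IdentityFile <<your private pem file>>.pem
EOF
# Afterwards, simply:
ssh k8snode
```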
https://github.com/dwaiba/aws-terraform
:beginner: Pre-req:
You can generate new AWS access credentials from your AWS console via the URL for your `<<account_user>>`: https://console.aws.amazon.com/iam/home?region=us-east-2#/users/<<account_user>>?section=security_credentials
:beginner: Plan:
```sh
terraform init && terraform plan -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes -out "run.plan"
```
:beginner: Apply:
```sh
terraform apply "run.plan"
```
:beginner: Destroy:
```sh
terraform destroy -var aws_access_key=<<ACCESS KEY>> -var aws_secret_key=<<SECRET KEY>> -var count_vms=0 -var disk_sizegb=30 -var distro=centos7 -var key_name=testdwai -var elbcertpath=~/Downloads/testdwaicert.pem -var private_key_path=~/Downloads/testdwai.pem -var region=us-east-1 -var tag_prefix=k8snodes --auto-approve
```
```sh
curl -sLSf https://get.k3sup.dev | sh && sudo install -m 0755 k3sup /usr/local/bin/
```

One can now use k3sup.
Obtain the public IPs for the running instances with `aws ec2 describe-instances`, or obtain just the public IPs with:

```sh
aws ec2 describe-instances --query "Reservations[*].Instances[*].PublicIpAddress" --output=text
```

One can use the first IP as master to create a cluster:

```sh
k3sup install --cluster --ip <<Any of the Public IPs>> --user <<ec2-user or centos as per distro>> --ssh-key <<the location of the aws private key like ~/aws-terraform/yourpemkey.pem>>
```

One can also join another IP as master or node. For master:

```sh
k3sup join --server --ip <<Any of the other Public IPs>> --user <<ec2-user or centos as per distro>> --ssh-key <<the location of the aws private key like ~/aws-terraform/yourpemkey.pem>> --server-ip <<The Server Public IP>>
```
or as a simple script:
```sh
export SERVER_IP=$(terraform output -json instance_ips | jq -r '.[]' | head -n 1)
k3sup install --cluster --ip $SERVER_IP --user ec2-user --ssh-key 'Your Private SSH Key Location' --k3s-extra-args '--no-deploy traefik --docker'
terraform output -json instance_ips | jq -r '.[]' | tail -n +2 | xargs -I {} k3sup join --server-ip $SERVER_IP --ip {} --user ec2-user --ssh-key 'Your Private SSH Key Location' --k3s-extra-args '--docker'
export KUBECONFIG=`pwd`/kubeconfig
kubectl get nodes -o wide -w
```
```sh
kubeadm init
```

One can now use Weave Net and join other workers:

```sh
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
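To join the other workers, a standard kubeadm step (not specific to this repo) is to print a fresh join command on the master and run its output on each worker:

```sh
# On the master: prints `kubeadm join <ip>:6443 --token ... --discovery-token-ca-cert-hash ...`
kubeadm token create --print-join-command
```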
Please report bugs by opening an issue in the GitHub Issue Tracker; an auto template is defined for bugs. Patches can be submitted as GitHub pull requests. If using GitHub, please make sure your branch applies to the current master as a 'fast forward' merge (i.e. without creating a merge commit). Use the `git rebase` command to update your branch to the current master if necessary.
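For example, a typical rebase flow (assuming `origin` points at this repository) looks like:

```sh
git fetch origin                 # fetch the latest master
git rebase origin/master         # replay your branch commits on top of it
git push --force-with-lease      # update your pull-request branch safely
```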