Friday, May 29, 2020

More control over your k8s clusters

Introduction

We have created our clusters and we have a few services running. How can we control the nodes that kops creates?

What are the AWS Resources Created?
  • VPC
  • Subnet
    • 3 private subnets for the cluster master and nodes
    • 3 utility subnets for the bastions
  • Route Tables
    • 1 route table for the bastions
    • 3 route tables for the cluster, one per availability zone
  • Routes
    • The bastion route table has a 0.0.0.0/0 route to the internet gateway
    • Each cluster route table has a 0.0.0.0/0 route to its NAT gateway
  • Internet Gateway
  • Elastic IP
    • 3 Elastic IPs, one attached to each of the 3 NAT gateways
  • NAT Gateways
    • 3 NAT gateways, one per availability zone
  • Security Groups
    • 3 security groups, one each for the bastion, master & nodes
  • Load Balancers
    • 1 for Master
    • 1 for Bastion
  • Launch Configurations
    • 1 each for Master, Node & Bastion
  • Autoscaling Groups
    • 1 each corresponding to Launch configurations
  • Instances
    • As requested for Master, Nodes and Bastion
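
You can inspect what kops created from the CLI. As a sketch - assuming kops's convention of tagging the resources it creates with a KubernetesCluster tag - the cluster's instances can be listed with:

aws ec2 describe-instances \
    --filters "Name=tag:KubernetesCluster,Values=k8.shivag.io" \
    --query "Reservations[].Instances[].InstanceId"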

Advanced Configuration

kops has the following parameters, which enable you to control your cluster (see the scaling example after this list):

  • --master-count -> Allows you to specify the number of masters
  • --master-size -> Allows you to specify the size of the master machines
  • --node-count -> Allows you to specify the number of nodes
  • --node-size -> Allows you to specify the size of the node machines
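
These flags apply when the cluster is created (see the Create Cluster command later in this post). To change the node count on an existing cluster, a sketch (assuming the cluster name and state bucket used throughout this post):

kops edit ig nodes --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
# in the editor, set spec.minSize and spec.maxSize to the desired node count
kops update cluster --name k8.shivag.io --state s3://k8-kops-cluster-state-s3 --yes
kops rolling-update cluster --name k8.shivag.io --state s3://k8-kops-cluster-state-s3 --yes
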
Use spot instances to reduce costs

It is no secret that spot instances are much cheaper than on-demand instances. You can run the master and/or nodes as spot instances with this hack.

Master ==> kops edit ig master-eu-west-1a --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
Nodes ==> kops edit ig nodes --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
Bastion ==> kops edit ig bastions --name k8.shivag.io --state s3://k8-kops-cluster-state-s3

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-18T19:28:46Z"
  labels:
    kops.k8s.io/cluster: k8.shivag.io
  name: master-eu-west-1a
spec:
  image: kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: t2.medium
  maxPrice: "0.05"    # <=== Max cost for spot instance
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a
Ensure the maxPrice you set corresponds to the current spot price of the instance type you are using.
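
After saving the change, a sketch of applying it (same cluster name and state bucket as the edit commands above):

kops update cluster --name k8.shivag.io --state s3://k8-kops-cluster-state-s3 --yes
kops rolling-update cluster --name k8.shivag.io --state s3://k8-kops-cluster-state-s3 --yes

The rolling update replaces the existing on-demand instances with spot instances bid at or below maxPrice.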

Additional Security Group Assignment

The following kops flags let you assign additional security groups to the instance groups:

--master-security-groups
--node-security-groups
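
For example (a sketch, reusing the security group sg-095b938fcbad614bc from the bastion example below), these are added to the Create Cluster command shown later in this post:

--master-security-groups sg-095b938fcbad614bc
--node-security-groups sg-095b938fcbad614bc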

Note: 
  • Security groups are attached to a VPC, so the VPC must be pre-created and specified during the create phase (see the sketch below); alternatively, make this change later and propagate it, in which case you must detach the additional security group from the instance groups and delete it before deleting the cluster
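
To pre-create the VPC and have kops use it, add a --vpc flag to the Create Cluster command shown later in this post (the VPC ID below is hypothetical):

--vpc vpc-0123456789abcdef0
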
Additional Security Group for Bastions

ubuntu@ip-10-0-1-79:~$ kops edit ig bastions --name k8.shivag.io --state s3://shivag.kube-kops-state
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-21T21:40:33Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: k8.shivag.io
  name: bastions
spec:
  additionalSecurityGroups:
  - sg-095b938fcbad614bc
  image: kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility-eu-west-1a
  - utility-eu-west-1b
  - utility-eu-west-1c
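
After saving the edit, a sketch of applying the change (same cluster name and state bucket as the edit command above):

kops update cluster --name k8.shivag.io --state s3://shivag.kube-kops-state --yes
kops rolling-update cluster --name k8.shivag.io --state s3://shivag.kube-kops-state --yes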

S3Browser

S3 Browser is a small Python Flask app. The usual S3 browser utilities available on the web require you to create an access key & secret key or a user ID; this is a potential security risk.

This utility instead relies on the IAM role assigned to the AWS resource where it is running.

Features available:

  • Browse a bucket
  • Upload files into a folder
  • Download files from a bucket/folder
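
Because the utility depends on the instance role, a quick way to confirm that role credentials are available on the machine running the app (with no access keys configured) is:

aws sts get-caller-identity

This should report the ARN of the assumed role rather than an IAM user.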

Monday, May 18, 2020

Install Kubernetes dashboard

Introduction

It is understood that you already have a successfully created Kubernetes cluster. If you are having issues, follow the instructions in Create your own Kubernetes Cluster on AWS.

Ensure the Cluster is up and Running

We have the cluster operational; validate it as described in the previous article. Once the master and all the nodes are up and running, we can also verify this internally with kubectl:
ubuntu@ip-10-0-1-79:~$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-20-35-205.eu-west-1.compute.internal   Ready    master   5h42m   v1.18.2
ip-172-20-53-48.eu-west-1.compute.internal    Ready    node     5h41m   v1.18.2
ip-172-20-67-18.eu-west-1.compute.internal    Ready    node     5h41m   v1.18.2
ip-172-20-97-65.eu-west-1.compute.internal    Ready    node     5h40m   v1.18.2
This confirms the master and nodes are all Ready, and also displays the Kubernetes version running on each node.

Create the dashboard components

We need to create the dashboard components. There are multiple sources with differing instructions; the following works with clusters created using kops on AWS.
ubuntu@ip-10-0-1-79:~$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
We have now successfully created the kubernetes-dashboard and its service. The following steps help us browse the dashboard.
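
Before proceeding, it is worth checking that the dashboard pods are running in the kubernetes-dashboard namespace created above:

kubectl get pods -n kubernetes-dashboard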

To browse the dashboard, we need to perform the following actions:
  1. Create a service account
    ubuntu@ip-10-0-1-79:~$ kubectl create serviceaccount dashboard-admin-sa
    serviceaccount/dashboard-admin-sa created
  2. Bind the account created to the cluster-admin role
    ubuntu@ip-10-0-1-79:~$ kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
    clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-sa created
  3. Find the URL at which your cluster can be reached
    ubuntu@ip-10-0-1-79:~$ kubectl cluster-info
    Kubernetes master is running at https://api-k8-shivag-io-covt8s-**********.eu-west-1.elb.amazonaws.com
    KubeDNS is running at https://api-k8-shivag-io-covt8s-**********.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  4. Find the Kubernetes cluster password
    ubuntu@ip-10-0-1-79:~$ kops get secrets kube --type secret -oplaintext --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
    FURT*****YPyC*****S6w4*****GPVfd
  5. Find the dashboard-admin-sa user token
    ubuntu@ip-10-0-1-79:~$ kubectl get secret $(kubectl get serviceaccount dashboard-admin-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
    eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN********
  6. Build the final URL. The link for the dashboard when installed locally is
    http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
    Substitute http://localhost:8001 with the cluster URL from Step 3
    https://api-k8-shivag-io-covt8s-1234567890.eu-west-1.elb.amazonaws.com
    so the final URL is
    https://api-k8-shivag-io-covt8s-1234567890.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
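
For the local route mentioned in Step 6, the localhost:8001 address comes from kubectl proxy, which listens on 127.0.0.1:8001 by default and forwards authenticated requests to the API server:

kubectl proxy
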
Now we have the URL for the dashboard; the browser presents two authentication prompts:
  • First, for the Kubernetes cluster, where the user is "admin" and the password comes from Step 4

  • Next, the dashboard login, where you enter the token from Step 5

Create your own Kubernetes Cluster on AWS

Motivation
I have been looking into Kubernetes and wanted to start with defining my own cluster.

Why this blog?
I searched many ways to implement k8s clusters. I started out with kubectl. Everything went without a hitch until I had to access EBS volumes on AWS. I stumbled into a lot of issues; after posting them on stackoverflow.com, a comment suggested implementing the cluster using kops. I searched for instructions on installing the cluster with kops and found some of the available information cryptic. I have tried to demystify some of the complexities of using kops.

It is very easy to manage your clusters once you understand the basics of kops.

Why my own cluster?
AWS, Google & Azure provide managed Kubernetes services - they simplify everything for you; while using these services is easy, you miss out on some of the complexities and the understanding that comes with implementing the cluster yourself.

Versions used
  • kops - 1.16.2 (at the time of writing this blog, version 1.17 is available)
  • kubernetes - 1.18.2
Background
kops is a utility that enables you to create and maintain Kubernetes clusters. We are using it on AWS, launching kops from an EC2 machine.

There are two ways to go about this:
  • Define an IAM user who has the privileges to create/modify/delete the resources and use this user
  • Attach an IAM role with the privileges to create/modify/delete the resources
I am choosing the 2nd option.

Building Blocks

  1. Create a VPC in region eu-west-1 - Name k8-kops-vpc
    Ensure DNS resolution is enabled and a DHCP option set is associated
  2. Create 3 subnets for zones a,b,c - k8-kops-subnet-1a, k8-kops-subnet-1b, k8-kops-subnet-1c
  3. Create a private hosted zone shivag.io associated with the VPC k8-kops-vpc
  4. Create an internet gateway - k8-kops-igw
  5. Create a route table - k8-kops-rtb
  6. Create a route in the route table for destination 0.0.0.0/0 to the internet gateway k8-kops-igw
  7. Associate the 3 subnets k8-kops-subnet-1a, k8-kops-subnet-1b, k8-kops-subnet-1c with the route table k8-kops-rtb
  8. Create a security group k8-kops-sg associated with the VPC k8-kops-vpc
  9. Create inbound rules to allow
    1. All traffic from your current machine - so you can interact with the EC2 instances
    2. All traffic within the security group k8-kops-sg
  10. Create an S3 bucket k8-kops-cluster-s3
  11. Create an IAM role k8-kops-role
  12. Add the following inline policies
    1. AllowIAM on Resource "*"
      "iam:CreateGroup",
      "iam:ListRoles",
      "iam:ListRolePolicies",
      "iam:AttachGroupPolicy",
      "iam:CreateUser",
      "iam:AddUserToGroup",
      "iam:ListInstanceProfiles",
      "iam:GetInstanceProfile",
      "iam:CreateInstanceProfile",
      "iam:GetRole",
      "iam:GetRolePolicy",
      "iam:PutRolePolicy",
      "iam:CreateRole",
      "iam:AddRoleToInstanceProfile",
      "iam:CreateServiceLinkedRole",
      "iam:DeleteRole",
      "iam:DeleteInstanceProfile",
      "iam:RemoveRoleFromInstanceProfile",
      "iam:DeleteRolePolicy",
      "iam:PassRole"
    2. AllowS3 on
      Resource 
          arn:aws:s3:::k8-kops-cluster-s3
          arn:aws:s3:::k8-kops-cluster-state-s3
          arn:aws:s3:::k8-kops-cluster-s3/*
          arn:aws:s3:::k8-kops-cluster-state-s3/*
      Actions
          "s3:*"
    3. AllowEc2 on Resource "*"
      "ec2:DescribeAvailabilityZones",
      "ec2:DescribeKeyPairs",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeVolumes",
      "ec2:DescribeDhcpOptions",
      "ec2:DescribeInternetGateways",
      "ec2:DescribeRouteTables",
      "ec2:DescribeSubnets",
      "ec2:DescribeVpcs",
      "ec2:DescribeVpcAttribute",
      "ec2:DescribeTags",
      "ec2:DescribeImages",
      "ec2:DescribeNatGateways",
      "ec2:DescribeAddresses",
      "ec2:DescribeRegion",
      "ec2:CreateVpc",
      "ec2:CreateDhcpOptions",
      "ec2:CreateRouteTable",
      "ec2:CreateRoute",
      "ec2:CreateSubnet",
      "ec2:CreateSecurityGroup",
      "ec2:ModifyVpcAttribute",
      "ec2:ImportKeyPair",
      "ec2:AssociateDhcpOptions",
      "ec2:AuthorizeSecurityGroupEgress",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CreateVolume",
      "ec2:CreateTags",
      "ec2:AssociateRouteTable",
      "ec2:AllocateAddress",
      "ec2:CreateInternetGateway",
      "ec2:CreateNatGateway",
      "ec2:AttachInternetGateway",
      "ec2:AttachVolume",
      "ec2:DeleteKeyPair",
      "ec2:DeleteDhcpOptions",
      "ec2:DeleteRouteTable",
      "ec2:DeleteNatGateway",
      "ec2:DeleteInternetGateway",
      "ec2:RevokeSecurityGroupIngress",
      "ec2:RevokeSecurityGroupEgress",
      "ec2:DeleteSubnet",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteVolume",
      "ec2:TerminateInstances",
      "ec2:DeleteVpc",
      "ec2:DetachInternetGateway",
      "ec2:ReleaseAddress"
    4. AllowAutoscaling on Resource "*"
      "autoscaling:DescribeTags",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:CreateLaunchConfiguration",
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:CreateAutoScalingGroup",
      "autoscaling:AttachLoadBalancers",
      "autoscaling:EnableMetricsCollection",
      "autoscaling:UpdateAutoScalingGroup",
      "autoscaling:DeleteAutoscalingGroup",
      "autoscaling:DeleteLaunchConfiguration"
    5. AllowELB on Resource "*"
      "elasticloadbalancing:DescribeLoadBalancerAttributes",
      "elasticloadbalancing:DescribeLoadBalancers",
      "elasticloadbalancing:DescribeTargetGroups"
      "elasticloadbalancing:ModifyLoadBalancerAttributes",
      "elasticloadbalancing:ConfigureHealthCheck",
      "elasticloadbalancing:CreateLoadBalancer",
      "elasticloadbalancing:DescribeTags",
      "elasticloadbalancing:AddTags",
      "elasticloadbalancing:DeleteTags",
      "elasticloadbalancing:DeleteLoadBalancer"
    6. AllowRoute53 on Resource "*"
      "route53:GetHostedZone",
      "route53:ListHostedZones",
      "route53:ListResourceRecordSets",
      "route53:ListHostedZonesByName",
      "route53:AssociateVPCWithHostedZone",
      "route53:ChangeResourceRecordSets"
  13. Create an EC2 instance with the following parameters - this EC2 will be the kops instance from which you manage the Kubernetes cluster
    1. AMI : Ubuntu
    2. VPC : k8-kops-vpc
    3. Subnet: any of k8-kops-subnet-1a, k8-kops-subnet-1b, k8-kops-subnet-1c
    4. Security Group : k8-kops-sg
    5. IAM : k8-kops-role
    6. Instance Type : t2.micro
    7. User Data should contain the following script

      #!/bin/bash

      #
      # Update the repository and upgrade all the packages
      #
      apt-get update
      apt-get upgrade -y

      #
      # Install awscli
      #
      apt-get install -y awscli

      #
      # Set the aws region variables in the profile for every user logging in
      #
      echo "***** Set aws default region"
      cat > /etc/profile.d/aws-default.sh <<EOF
      export AWS_DEFAULT_REGION=eu-west-1
      export AWS_REGION=eu-west-1
      EOF

      #
      # Install packages required
      #
      apt-get install -y apt-transport-https

      #
      # add the pgp key for kubernetes repository
      #
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

      #
      # add the kubernetes repository location
      #
      cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb http://apt.kubernetes.io/ kubernetes-xenial main
      EOF
      apt-get update

      #
      #Install kubectl
      #
      apt-get install -y kubectl

      #
      # Download the latest kops executable
      #
      curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
      chmod +x kops-linux-amd64

      #
      # Move the latest version to the path
      #
      sudo mv kops-linux-amd64 /usr/local/bin/kops
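
Once the instance boots and the user data finishes, you can confirm the tools were installed correctly:

kops version
kubectl version --client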

Now we have built an EC2 instance in AWS which has all the permissions and the software needed to manage your Kubernetes cluster using kops.

All we need to do is invoke the kops commands as needed

The following are the parameters we pass to kops:
  • Name : k8.shivag.io # specifies the name of the cluster
  • Zones : eu-west-1a, eu-west-1b, eu-west-1c # the zones in which the cluster will have nodes
  • State : s3://k8-kops-cluster-state-s3 # the s3 bucket where the configuration will be stored
  • Kubernetes version : 1.18.2 # version of kubernetes to use
  • Networking : calico # the networking that will be used between the pods
  • dns-zone : shivag.io # the hosted zone where we will have route 53 entries
  • Bastion : "true" # creates an additional instance through which we can log in and manage the cluster
Create Cluster
kops create cluster \
--name k8.shivag.io \
--zones eu-west-1a,eu-west-1b,eu-west-1c \
--state s3://k8-kops-cluster-state-s3 \
--kubernetes-version 1.18.2 \
--master-count 1 \
--master-size=t2.medium \
--node-count 3 \
--node-size=t2.micro \
--cloud=aws \
--v=5 \
--networking calico \
--dns-zone=shivag.io \
--topology private \
--bastion="true" \
--dns private
This command only writes the cluster configuration to the S3 state store.
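
Before applying, running kops update cluster without --yes performs a dry run and lists the resources that would be created:

kops update cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3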

Update Cluster
kops update cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3 \
--v=5 \
--yes

This command creates all the resources in AWS.

Validate Cluster
kops validate cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3
Delete Cluster
kops delete cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3 \
--yes

This command deletes all the resources and removes the configuration from the S3 state store.