Wednesday, December 9, 2020

Python Platform Ecosystem - Enabling execution of user code in a platform

When working on a multi-tenant SaaS platform, one of the key aspects is enabling an ecosystem around it. Building an ecosystem involves allowing user code to be executed in the platform, so that users can augment the platform with complementary functionality of their own. This enables versatility, growth and adaptation of the platform; however, it comes with its own challenges.

Executing user code inside the platform raises the following challenges.

  • The user code should not interfere with the core platform or with other users
  • The packages a user installs need to be separate from those used by the core platform or by other users
  • Packages that the user code depends on must be installable on demand
  • The user code must be made to use the packages installed for it
It is a huge task to ensure all of the above. Searching the web, I found the information on how this can be achieved scattered all over. Let us look at each of these in detail.

Segregation of code
Each user gets an exclusive area where their code resides along with the libraries needed to execute it. This is easily achievable by having the platform automatically provision a directory structure for each user. Since the activities behind the scenes are performed by the platform, there is no security concern here.
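As a rough sketch of what the platform can do at user onboarding (the base path and folder names here are assumptions, matching the structure used later in this post):

import os

def provision_user_area(username, base="/home/ubuntu"):
    # Exclusive tree per user: code lives in usermodules,
    # locally installed libraries in usermodules/localpackages
    localpackages = os.path.join(base, username, "usermodules", "localpackages")
    os.makedirs(localpackages, exist_ok=True)
    # Empty __init__.py so localpackages is treated as a package
    open(os.path.join(localpackages, "__init__.py"), "a").close()
    return os.path.join(base, username)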

Identify packages to be installed by user
Each user's project structure will have a pythonlib sub-folder where the Python libraries are placed.

How do we find what packages are already available and what is needed?

The setuptools project provides the pkg_resources module. This module has a WorkingSet class which lists the packages that are available.
  • It can be instantiated without any parameters to get the globally available packages
  • We can pass a list of paths to find the packages that have been installed in those paths
import pkg_resources

# Packages installed in the user's private library path
local_packages = pkg_resources.WorkingSet(["/home/ubuntu/<user>/pythonlib"])
local_package_list = [i.key for i in local_packages]

# Packages available globally (on the default sys.path)
global_packages = pkg_resources.WorkingSet()
global_package_list = [i.key for i in global_packages]

With these two lists we can identify which packages are already available globally and which need to be installed. Any package that needs installing goes into /home/ubuntu/<user>/pythonlib, so it clashes neither with the global packages nor with other users' packages.
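For example, a hypothetical list of required packages can be checked against both sets before deciding what to install:

needed_packages = ["yfinance", "pandas"]  # hypothetical requirement list

available = set(global_package_list) | set(local_package_list)
to_install = [p for p in needed_packages if p not in available]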

Installing packages programmatically 

This is one of the easier tasks to do:

import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "-t", "/user/pythonlib/path", "packageIneed"])
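pip returns a non-zero exit code when an install fails, so in practice it is worth catching that - a minimal sketch around the same call:

import subprocess
import sys

try:
    subprocess.check_call([sys.executable, "-m", "pip", "install",
                           "-t", "/user/pythonlib/path", "packageIneed"])
except subprocess.CalledProcessError as e:
    print("pip install failed with exit code", e.returncode)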

Ensure the code uses the packages available in /user/pythonlib/path 

This is the tricky part. What I have done in my platform is to make the local library path a subdirectory of each user's own folder structure.

Consider the following directory structure

/home/ubuntu/user1 -> this is an exclusive path for user1
    /usermodules -> this directory has all the python user modules
        /localpackages -> this is the directory where all the local packages are installed
            __init__.py -> empty file to ensure that localpackages is considered as a package

The trick is to modify the import statements just before execution so that they refer to the localpackages directory

Following is an example where yfinance & pandas are installed locally.

Original Code:
import yfinance 
from pandas import read_csv

Modified Code:

from .localpackages import yfinance
from .localpackages.pandas import read_csv
I have this stopgap program to do the above; I am still looking for a clean solution based on proper parsing (a sketch using the ast module follows the listing).

packages = ["yfinance", "pandas"]

# Read the original source
with open(filename, "r") as pyobj:
    pycodelines = pyobj.read().splitlines()

# Rewrite the file in place
with open(filename, "w") as pyobj:
    for line in pycodelines:
        token = line.split(" ")
        stripped_token = line.lstrip(" ").split(" ")
        # Only touch import/from lines that reference a local package
        if (len(stripped_token) > 1 and stripped_token[0] in ["import", "from"]
                and stripped_token[1] in packages):
            new_token = []
            from_set = False
            for t in token:
                if t == "from":
                    from_set = True
                    new_token.append(t)
                elif t == "import" and not from_set:
                    # "import pkg" becomes "from .localpackages import pkg"
                    new_token.append("from .localpackages " + t)
                elif from_set and t in packages:
                    # "from pkg import x" becomes "from .localpackages.pkg import x"
                    new_token.append(".localpackages." + t)
                else:
                    new_token.append(t)
            new_line = " ".join(new_token)
        else:
            new_line = line
        print(new_line)
        pyobj.write(new_line + "\n")
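For completeness, here is a minimal sketch of how the same rewrite could be done with the standard library's ast module instead of string splitting (ast.unparse requires Python 3.9+ and does not preserve comments or formatting):

import ast

PACKAGES = {"yfinance", "pandas"}

class LocalPackageRewriter(ast.NodeTransformer):
    def visit_Import(self, node):
        # "import yfinance" -> "from .localpackages import yfinance"
        result = []
        for alias in node.names:
            if alias.name.split(".")[0] in PACKAGES:
                result.append(ast.ImportFrom(module="localpackages",
                                             names=[alias], level=1))
            else:
                result.append(ast.Import(names=[alias]))
        return result

    def visit_ImportFrom(self, node):
        # "from pandas import read_csv" -> "from .localpackages.pandas import read_csv"
        if node.level == 0 and node.module and node.module.split(".")[0] in PACKAGES:
            node.module = "localpackages." + node.module
            node.level = 1
        return node

# Same "filename" as in the stopgap above
with open(filename) as f:
    tree = LocalPackageRewriter().visit(ast.parse(f.read()))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))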

Tuesday, June 23, 2020

Authentication & Authorisation with AWS Cognito - Part 1 Authentication

We have set up our Kubernetes cluster; we have set up the dashboard to monitor it; we have implemented a few services.
What next? 

We need to build an application that will use these services - Let us take the first step to build an application - Authentication and Authorisation - The 2A's

Most times, while building an application, we concentrate so much on its features that the 2A's become an afterthought.

I started out with a problem statement and have been working on an application; I have reached a point where I can build on the functionality and turn it into a product.

However, I have taken a pause! I am focussing on the 2A's as they are very important; they are two of the most important things to get right as we start building any product. In this day and age, if the security around the product is not good, it just will not fly.

In the olden days we had all these fancy ways of building the database for user management and spent a lot of time on hashing passwords - a lot has changed. Thanks to AWS Cognito, you can weave both authentication and authorisation into your application.

In this example I am illustrating with Python & Flask - you can apply the same concepts, along with the nuances of the language you choose, to develop your application.

As I wade through AWS Cognito I find a lot of information scattered all over the place. I am trying to collate all that I found in one single place.

Let us deal with Part 1 - Authentication and follow it up with Authorisation.

AWS Cognito has two components - User Pools & Identity Pools.
User Pools deal with authentication and Identity Pools deal with authorisation.

Let us consider the following features as part of User Management;
  • The user signs up and gets a confirmation email
  • The user forgets to confirm and is sent a new confirmation code
  • The user confirms their account
  • The user has forgotten the password and is sent an email to reset it
  • The user uses the code provided to set a new password
  • The user now uses the new password and performs a successful login to the system
Boto3 caters to all of the above with the following functions on the "cognito-idp" client:
  • sign_up
  • resend_confirmation_code
  • confirm_sign_up
  • forgot_password
  • confirm_forgot_password
  • initiate_auth
How does this work?

The setup is multi-layered. Following are the component layers:
  • User Pools
  • App Clients - created within a User Pool

Before we dive deep into these functions - Let us create a user pool
The user pool has a set of standard attributes:
  • given name, middle name, family name, name
  • nickname, preferred username, email
  • address, birthdate, phone number, gender
  • locale, picture, profile, zoneinfo
  • updated at, website

At the time of pool creation we can set any or all of these as required - you cannot change them afterwards

  • We can also define custom fields as needed
  • Specify password characteristics
  • Specify whether users can sign up themselves or are added by an administrator
  • Specify Multi-Factor Authentication
  • Password recovery methods
  • Customise email & SMS messages
  • Define Tags
  • Indicate whether user devices are remembered
  • Define workflows at various stages of the creation of a user
There is a lot more you can do, including federation of identities.

Once you define the user pool, define an App Client for it; the app client carries the options for how authentication can be performed.

You will need the user pool id and the app client id within your code to perform the authentication; if you enabled "generate client secret", you will need the client secret too.

Let us now go back to the functionality we defined as part of authentication initially.

The following choices were made for this example:
  • Required Attributes
    • email
    • name
    • phone number
  • username
    • email
  • secret generated for app client
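All the snippets below assume a boto3 client and the pool configuration are already in scope - a sketch with placeholder values:

import boto3

cidpClient = boto3.client("cognito-idp", region_name="eu-west-1")  # your pool's region
CLIENT_ID = "your-app-client-id"          # from the App Client settings
CLIENT_SECRET = "your-app-client-secret"  # only if "generate client secret" was chosen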
Function to create security hash
import base64
import hashlib
import hmac

def get_secret_hash(username):
    # HMAC-SHA256 of username + client id, keyed with the client secret,
    # then base64-encoded - the form Cognito expects as SecretHash
    msg = username + CLIENT_ID
    dig = hmac.new(str(CLIENT_SECRET).encode('utf-8'),
        msg=str(msg).encode('utf-8'), digestmod=hashlib.sha256).digest()
    securityHash = base64.b64encode(dig).decode()
    return securityHash
The above function is used by all the boto3 API calls when "generate secret" was chosen for the App Client; we compute the hash over the username - in our examples we will use the email as the username.


User Signup
        response = cidpClient.sign_up(
            ClientId=CLIENT_ID,
            SecretHash=get_secret_hash(email),
            Username=email,
            Password=password, 
            UserAttributes=[
                {
                    'Name': "name",
                    'Value': name
                },
                {
                    'Name': "email",
                    'Value': email
                },
                {
                    'Name': "phone_number",
                    'Value': phone
                }
            ],
            ValidationData=[
                {
                    'Name': "email",
                    'Value': email
                }
            ])
Pass in the attributes you have marked as required, plus any user-defined attributes.
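Custom attributes go into the same UserAttributes list and are referenced with the custom: prefix - for example, with a hypothetical plan attribute defined on the pool:

extra_attributes = [
    {"Name": "custom:plan", "Value": "free"},  # hypothetical custom attribute
]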

Verify Signup
        response = cidpClient.confirm_sign_up(
            ClientId=CLIENT_ID,
            SecretHash=get_secret_hash(email),
            Username=email,
            ConfirmationCode=code,
            ForceAliasCreation=False,
        )
Verification Reminder
        response = cidpClient.resend_confirmation_code(
            ClientId=CLIENT_ID,
            Username=email,
            SecretHash=get_secret_hash(email),
        )
Forgot Password
        response = cidpClient.forgot_password(
            ClientId=CLIENT_ID,
            Username=email,
            SecretHash=get_secret_hash(email),
        )
Confirm Forgot Password
        response = cidpClient.confirm_forgot_password(
            ClientId=CLIENT_ID,
            SecretHash=get_secret_hash(email),
            Username=email,
            ConfirmationCode=code,
            Password=password,
           )
Login
      response = cidpClient.initiate_auth(
                 ClientId=CLIENT_ID,
                 AuthFlow='USER_PASSWORD_AUTH',
                 AuthParameters={
                     'USERNAME': email,
                     'SECRET_HASH': get_secret_hash(email),
                     'PASSWORD': password,
                  })
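On a successful login (assuming no MFA or other challenge is configured), the tokens come back in the response; we will need these in the next part:

auth = response["AuthenticationResult"]
id_token = auth["IdToken"]            # identity claims (a JWT)
access_token = auth["AccessToken"]    # authorises further calls
refresh_token = auth["RefreshToken"]  # used to obtain fresh tokens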
For each of these functions, handle the corresponding boto3 exceptions so that every failure mode is reported back to the user meaningfully.
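For example, a sketch of wrapping the login call (these exception classes are exposed on the boto3 client):

try:
    response = cidpClient.initiate_auth(
        ClientId=CLIENT_ID,
        AuthFlow='USER_PASSWORD_AUTH',
        AuthParameters={
            'USERNAME': email,
            'SECRET_HASH': get_secret_hash(email),
            'PASSWORD': password,
        })
except cidpClient.exceptions.UserNotFoundException:
    print("No such user - ask them to sign up")
except cidpClient.exceptions.UserNotConfirmedException:
    print("Account not confirmed - resend the confirmation code")
except cidpClient.exceptions.NotAuthorizedException:
    print("Incorrect username or password")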

In the next section we will discuss authorisation and refreshing the tokens.

Wednesday, June 10, 2020

Deploy your first Kubernetes Service

We have setup our own Kubernetes cluster and we have installed the Kubernetes Dashboard. Let us try and install a service.

Nope, we are not going to do Hello World - I am bored with it

I created a small Flask application which allows me to browse S3 buckets based on the permissions granted by IAM roles.

We will split this into two parts 
  1. Create a docker container and have this app running
  2. Create the same as a service in Kubernetes
Part 1 - Create a docker container and have an (Python Flask) app running inside the container 

Prerequisite:
  • Have a Python Flask app which runs locally
  • Install Docker
Steps to create the docker container
  • Create a folder for the app
    mkdir s3browser
  • Create the requirements file requirements.txt
    • List all the packages you need along with the versions
      Flask==1.1.2
      boto3==1.13.6
      werkzeug==0.16.1
      tzlocal==2.1
  • Create the Dockerfile
    • Choose a base image - Python:3.8.2
    • Specify the port for the container - 5000
    • Create the folder structure required in the container
    • Copy the files required from the local folder to the container
    • Specify the entry point - python
    • Specify the command to run - main.py
      FROM python:3.8.2
      WORKDIR /usr/src/app
      COPY requirements.txt ./
      RUN pip install -r requirements.txt
      RUN mkdir bootstrap fonts download upload
      COPY . .
      COPY ./bootstrap ./bootstrap
      COPY ./fonts ./fonts
      ENTRYPOINT ["python3"]
      CMD ["main.py"]
We now have all the required steps to create the docker image
[ec2-user@ip-10-0-1-38 app]$ sudo docker build -t s3browser .
Sending build context to Docker daemon  500.7kB
Step 1/10 : FROM python:3.8.2
3.8.2: Pulling from library/python
90fe46dd8199: Pull complete
35a4f1977689: Pull complete
bbc37f14aded: Pull complete
74e27dc593d4: Pull complete
4352dcff7819: Pull complete
deb569b08de6: Pull complete
98fd06fa8c53: Pull complete
7b9cc4fdefe6: Pull complete
512732f32795: Pull complete
Digest: sha256:8c98602bf4f4b2f9b6bd8def396d5149821c59f8a69e74aea1b5207713a70381
Status: Downloaded newer image for python:3.8.2
 ---> 4f7cd4269fa9
Step 2/10 : WORKDIR /usr/src/app
 ---> Running in 5d9cb6ae01f2
Removing intermediate container 5d9cb6ae01f2
 ---> 3e9513288f2c
Step 3/10 : COPY requirements.txt ./
 ---> 75b3d1cdab8d
Step 4/10 : RUN pip install -r requirements.txt
 ---> Running in 45827297fb12
Collecting Flask==1.1.2
  Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting boto3==1.13.6
  Downloading boto3-1.13.6-py2.py3-none-any.whl (128 kB)
Collecting werkzeug==0.16.1
  Downloading Werkzeug-0.16.1-py2.py3-none-any.whl (327 kB)
Collecting tzlocal==2.1
  Downloading tzlocal-2.1-py2.py3-none-any.whl (16 kB)
Collecting click>=5.1
  Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting itsdangerous>=0.24
  Downloading itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting Jinja2>=2.10.1
  Downloading Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting s3transfer<0.4.0,>=0.3.0
  Downloading s3transfer-0.3.3-py2.py3-none-any.whl (69 kB)
Collecting jmespath<1.0.0,>=0.7.1
  Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting botocore<1.17.0,>=1.16.6
  Downloading botocore-1.16.19-py2.py3-none-any.whl (6.2 MB)
Collecting pytz
  Downloading pytz-2020.1-py2.py3-none-any.whl (510 kB)
Collecting MarkupSafe>=0.23
  Downloading MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl (32 kB)
Collecting python-dateutil<3.0.0,>=2.1
  Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting docutils<0.16,>=0.10
  Downloading docutils-0.15.2-py3-none-any.whl (547 kB)
Collecting urllib3<1.26,>=1.20; python_version != "3.4"
  Downloading urllib3-1.25.9-py2.py3-none-any.whl (126 kB)
Collecting six>=1.5
  Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Installing collected packages: werkzeug, click, itsdangerous, MarkupSafe, Jinja2, Flask, six, python-dateutil, docutils, jmespath, urllib3, botocore, s3transfer, boto3, pytz, tzlocal
Successfully installed Flask-1.1.2 Jinja2-2.11.2 MarkupSafe-1.1.1 boto3-1.13.6 botocore-1.16.19 click-7.1.2 docutils-0.15.2 itsdangerous-1.1.0 jmespath-0.10.0 python-dateutil-2.8.1 pytz-2020.1 s3transfer-0.3.3 six-1.15.0 tzlocal-2.1 urllib3-1.25.9 werkzeug-0.16.1
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container 45827297fb12
 ---> ac86d48f81e6
Step 5/10 : RUN mkdir bootstrap fonts download upload
 ---> Running in 6fc0e64800a9
Removing intermediate container 6fc0e64800a9
 ---> e5b6c2505c07
Step 6/10 : COPY . .
 ---> 62002b4eefe0
Step 7/10 : COPY ./bootstrap ./bootstrap
 ---> 8909f349d50f
Step 8/10 : COPY ./fonts ./fonts
 ---> 1be3dd0c5262
Step 9/10 : ENTRYPOINT ["python3"]
 ---> Running in bf9e16f104bb
Removing intermediate container bf9e16f104bb
 ---> 4afaaef6b481
Step 10/10 : CMD ["main.py"]
 ---> Running in a5b3ecd0e25d
Removing intermediate container a5b3ecd0e25d
 ---> b63b9f6788b2
Successfully built b63b9f6788b2
Successfully tagged s3browser:latest
Run the docker image we have created
[ec2-user@ip-10-0-1-38 app]$ sudo docker run -d -p 5000:5000 s3browser
4a78858638e5c05e5c5352752ba04e16dbb7914f63ea38f77092060e117ff5b2
[ec2-user@ip-10-0-1-38 app]$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
4a78858638e5        s3browser           "python3 main.py"   18 seconds ago      Up 17 seconds       0.0.0.0:5000->5000/tcp   nostalgic_kalam
Open the app in your browser - the container's port 5000 is published on the host, so it is reachable at http://<host>:5000



Part 2 - Host this service in your Kubernetes Cluster

Steps to host service:
  • Create your free docker account - hub.docker.com - user gavihs
  • Create your repository (make it a public repository) - s3browser
  • Tag your image as <Account>/<Repository> - gavihs/s3browser
    [ec2-user@ip-10-0-1-38 app]$ sudo docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    s3browser           latest              6a56635d563d        20 minutes ago      1.01GB
    python              3.8.2               4f7cd4269fa9        4 weeks ago         934MB
    [ec2-user@ip-10-0-1-38 app]$ sudo docker tag 6a56635d563d gavihs/s3browser
    [ec2-user@ip-10-0-1-38 app]$ sudo docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    gavihs/s3browser    latest              6a56635d563d        21 minutes ago      1.01GB
    s3browser           latest              6a56635d563d        21 minutes ago      1.01GB
    python              3.8.2               4f7cd4269fa9        4 weeks ago         934MB
    
  • Push docker image to docker repository
    [ec2-user@ip-10-0-1-38 app]$ sudo docker login -u gavihs
    Password:
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    [ec2-user@ip-10-0-1-38 app]$ sudo docker push gavihs/s3browser
    The push refers to repository [docker.io/gavihs/s3browser]
    36e19271ecd9: Pushed
    a925ec3c1db0: Pushed
    15cd094801a0: Pushed
    9307954483b2: Pushed
    20ae7d89eead: Pushed
    d77ded6c9229: Pushed
    dd64728994ac: Pushed
    508c3f3b7a64: Pushed
    7e453511681f: Pushed
    b544d7bb9107: Pushed
    baf481fca4b7: Pushed
    3d3e92e98337: Pushed
    8967306e673e: Pushed
    9794a3b3ed45: Pushed
    5f77a51ade6a: Pushed
    e40d297cf5f8: Pushed
    latest: digest: sha256:9c579687302b0ba1683b263f9b1fab07080231d598325019c1f0afbf7a2031fb size: 3678
  • Create the deployment file - s3browser-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: s3browser
    spec:
      selector:
        matchLabels:
          run: s3browser
      replicas: 1
      template:
        metadata:
          labels:
            run: s3browser
        spec:
          containers:
          - name: s3browser
            image: gavihs/s3browser:latest
            ports:
            - containerPort: 5000
  • Create the deployment
    ubuntu@ip-10-0-1-79:~/flask-app$ kubectl create -f s3browser-deployment.yaml
    deployment.apps/s3browser created
  • Create the service
    ubuntu@ip-10-0-1-79:~/flask-app$ kubectl expose deployment.apps/s3browser
    service/s3browser exposed
    ubuntu@ip-10-0-1-79:~/flask-app$ kubectl get service
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    kubernetes   ClusterIP   100.64.0.1      <none>        443/TCP    9h
    s3browser    ClusterIP   100.67.84.177   <none>        5000/TCP   4s
  • Access the service from kubernetes
    https://api-k8-shivag-io-covt8s-1234567890.eu-west-1.elb.amazonaws.com/api/v1/namespaces/default/services/https:s3browser:/proxy/browse?folder=





Friday, May 29, 2020

More control with your k8 clusters

Introduction

We have created our clusters and we have a few services running. How can we control the nodes that are created using kops?

What are the AWS Resources Created
  • VPC
  • Subnet
    • 3 for the cluster master and nodes
    • 3 for bastions
  • Route Tables
    • 1 for the bastions
    • 3 for the cluster - one per availability zone
  • Routes
    • The bastion route table has a 0.0.0.0/0 route to the internet gateway
    • Each cluster route table has a 0.0.0.0/0 route to its NAT gateway
  • Internet Gateway
  • Elastic IPs
    • 3 public IPs, one attached to each of the 3 NAT gateways
  • NAT Gateways
    • 3 NAT gateways - one per availability zone
  • Security Groups
    • 3 security groups - one each for the Bastion, Master & Nodes
  • Load Balancers
    • 1 for Master
    • 1 for Bastion
  • Launch Configurations
    • 1 each for Master, Node & Bastion
  • Autoscaling Groups
    • 1 each corresponding to Launch configurations
  • Instances
    • As requested for Master, Nodes and Bastion

Advanced Configuration

kops has the following parameters which enable you to control your cluster:

  • master-count -> Allows you to specify the number of masters
  • master-size -> Allows you to specify the size of the master machine
  • node-count -> Allows you to specify the number of nodes
  • node-size -> Allows you to specify the size of the node machines
Use spot instances to reduce costs

It is no secret that Spot instances are much cheaper than on-demand instances. You can have the master and/or nodes as spot instances with this hack

Master ==> kops edit ig master-eu-west-1a --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
Nodes ==> kops edit ig nodes --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
Bastion ==> kops edit ig bastions --name k8.shivag.io --state s3://k8-kops-cluster-state-s3

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-18T19:28:46Z"
  labels:
    kops.k8s.io/cluster: k8.shivag.io
  name: master-eu-west-1a
spec:
  image: kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: t2.medium
  maxPrice: "0.05"    # <=== Max cost for the spot instance
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a
Ensure the max price corresponds to the instance type you are using.

Additional Security Group Assignment

The following kops flags let you assign additional security groups to the instance groups:

--master-security-groups
--node-security-groups

Note: 
  • Security Groups are attached to a VPC, so the VPC must be pre-created and specified during the create phase; alternatively, make this change later and propagate it - in which case you must detach and delete the extra security group before deleting the cluster
Additional Security Group for Bastions

ubuntu@ip-10-0-1-79:~$ kops edit ig bastions --name k8.shivag.io --state s3://shivag.kube-kops-state
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-21T21:40:33Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: k8.shivag.io
  name: bastions
spec:
  additionalSecurityGroups:
  - sg-095b938fcbad614bc
  image: kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility-eu-west-1a
  - utility-eu-west-1b
  - utility-eu-west-1c

S3Browser

S3 Browser is a small Python Flask app. The usual S3 browser utilities available on the web require you to create an access key & secret key, or a user id - a potential security risk.

This utility instead relies on the IAM role assigned to the AWS resource it runs on.

Features available:

  • Browse a bucket
  • Upload files into a folder
  • Download files from a bucket/folder

Monday, May 18, 2020

Install Kubernetes dashboard

Introduction

This assumes you already have a successfully created Kubernetes cluster. If you are having issues, follow the instructions in Create Your own Kubernetes Cluster on AWS.

Ensure the Cluster is up and Running

With the cluster operational, validate it as described in the previous article. Once all the nodes and the master are up and running, we can also check from inside the cluster with kubectl:
ubuntu@ip-10-0-1-79:~$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-20-35-205.eu-west-1.compute.internal   Ready    master   5h42m   v1.18.2
ip-172-20-53-48.eu-west-1.compute.internal    Ready    node     5h41m   v1.18.2
ip-172-20-67-18.eu-west-1.compute.internal    Ready    node     5h41m   v1.18.2
ip-172-20-97-65.eu-west-1.compute.internal    Ready    node     5h40m   v1.18.2
This confirms the master and nodes are all Ready, and also displays the Kubernetes version on each node.

Create the dashboard components

We need to create the dashboard components. Again, I found leads scattered across multiple places; the following works with clusters created using kops on AWS.
ubuntu@ip-10-0-1-79:~$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
We have now successfully created the kubernetes-dashboard and its service is running. The following steps let us browse the dashboard.

To browse the dashboard we need to perform the following actions
  1. Create a service account
    ubuntu@ip-10-0-1-79:~$ kubectl create serviceaccount dashboard-admin-sa
    serviceaccount/dashboard-admin-sa created
  2. Bind the account created to the cluster-admin role
    ubuntu@ip-10-0-1-79:~$ kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
    clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-sa created
  3. Find the URL at which your cluster can be reached
    ubuntu@ip-10-0-1-79:~$ kubectl cluster-info
    Kubernetes master is running at https://api-k8-shivag-io-covt8s-**********.eu-west-1.elb.amazonaws.com
    KubeDNS is running at https://api-k8-shivag-io-covt8s-**********.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  4. Find the Kubernetes cluster password
    ubuntu@ip-10-0-1-79:~$ kops get secrets kube --type secret -oplaintext --name k8.shivag.io --state s3://k8-kops-cluster-state-s3
    FURT*****YPyC*****S6w4*****GPVfd
  5. Find the dashboard-admin-sa user token
    ubuntu@ip-10-0-1-79:~$ kubectl get secret $(kubectl get serviceaccount dashboard-admin-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
    eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN**********W1lc3**********ZWZhd**********iZXJu**********zZXJ2**********VudC9**********eyJhb**********1NiIs**********VETHQ**********FVZWH**********cUhjW**********LdGp0**********.eyJp**********Jlcm5**********nZpY2**********Iiwia**********lcy5p**********NlYWN********
  6. The link for the dashboard when installed locally is 
    http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
    substitute the http://localhost:8001 with the cluster url 
    https://api-k8-shivag-io-covt8s-1234567890.eu-west-1.elb.amazonaws.com
    the final url is 
    https://api-k8-shivag-io-covt8s-1234567890.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
Now we have the url for the dashboard; we have two password prompts 
  • First for the kubernetes cluster where the user is "admin" -> password from Step 4

  • Next the token -> Token from Step 5






Create your own Kubernetes Cluster on AWS

Motivation
I have been looking into Kubernetes and wanted to start with defining my own cluster.

Why this blog?
I looked at a lot of ways to implement k8 clusters. I started out with kubectl; everything went without a hitch until I had to access EBS volumes on AWS. I stumbled into a lot of issues; when I posted them on stackoverflow.com, a comment suggested implementing with kops. The information I found on installing the cluster using kops was somewhat cryptic, so I have tried to demystify some of the complexities of using kops.

It is very easy to manage your clusters once you understand the basics of kops

Why my own cluster?
AWS, Google & Azure provide managed Kubernetes interfaces - they simplify everything for you; while using those services is easy, you miss out on some of the complexities, and on the understanding that comes with implementing the cluster yourself.

Versions used
  • kops - 1.16.2 (at the time of writing this blog version 1.17 is available)
  • kubernetes - 1.18.2
Background
Kops is a utility which enables you to create and maintain Kubernetes clusters. We are using it on AWS, launching kops from an EC2 machine.

There are two ways to go about this
  • Define an IAM user who has the privileges to create/modify/delete the resources and use this user
  • Attach an IAM role with the privileges to create/modify/delete the resources
I am choosing the 2nd option.

Building Blocks

  1. Create a VPC in region eu-west-1 - Name k8-kops-vpc
    Ensure DHCP resolution is enabled and a DHCP option set is associated
  2. Create 3 subnets for zones a,b,c - k8-kops-subnet-1a, k8-kops-subnet-1b, k8-kops-subnet-1c
  3. Create a private hosted zone shivag.io associated with the VPC k8-kops-vpc
  4. Create an internet gateway - k8-kops-igw
  5. Create a route table - k8-kops-rtb
  6. Add a route in the route table for destination 0.0.0.0/0 to the internet gateway k8-kops-igw
  7. Associate the 3 subnets k8-kops-subnet-1a, k8-kops-subnet-1b, k8-kops-subnet-1c with the route table k8-kops-rtb
  8. Create a security group k8-kops-sg associated with the VPC
  9. Create inbound rules to enable
    1. All traffic from your current machine - to do any interaction with the ec2 machines
    2. All traffic within the security group k8-kops-sg
  10. Create an S3 bucket k8-kops-cluster-s3
  11. Create an IAM role k8-kops-role
  12. Add the following inline policies
    1. AllowIAM on Resource "*"
      "iam:CreateGroup",
      "iam:ListRoles",
      "iam:ListRolePolicies",
      "iam:AttachGroupPolicy",
      "iam:CreateUser",
      "iam:AddUserToGroup",
      "iam:ListInstanceProfiles",
      "iam:GetInstanceProfile",
      "iam:CreateInstanceProfile",
      "iam:GetRole",
      "iam:GetRolePolicy",
      "iam:PutRolePolicy",
      "iam:CreateRole",
      "iam:AddRoleToInstanceProfile",
      "iam:CreateServiceLinkedRole",
      "iam:DeleteRole",
      "iam:DeleteInstanceProfile",
      "iam:RemoveRoleFromInstanceProfile",
      "iam:DeleteRolePolicy",
      "iam:PassRole"
    2. AllowS3 on
      Resource 
          arn:aws:s3:::k8-kops-cluster-s3
          arn:aws:s3:::k8-kops-cluster-state-s3
          arn:aws:s3:::k8-kops-cluster-s3/*
          arn:aws:s3:::k8-kops-cluster-state-s3/*
      Actions
          "s3:*"
    3. AllowEc2 on Resource "*"
      "ec2:DescribeAvailabilityZones",
      "ec2:DescribeKeyPairs",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeVolumes",
      "ec2:DescribeDhcpOptions",
      "ec2:DescribeInternetGateways",
      "ec2:DescribeRouteTables",
      "ec2:DescribeSubnets",
      "ec2:DescribeVpcs",
      "ec2:DescribeVpcAttribute",
      "ec2:DescribeTags",
      "ec2:DescribeImages",
      "ec2:DescribeNatGateways",
      "ec2:DescribeAddresses",
      "ec2:DescribeRegion",
      "ec2:CreateVpc",
      "ec2:CreateDhcpOptions",
      "ec2:CreateRouteTable",
      "ec2:CreateRoute",
      "ec2:CreateSubnet",
      "ec2:CreateSecurityGroup",
      "ec2:ModifyVpcAttribute",
      "ec2:ImportKeyPair",
      "ec2:AssociateDhcpOptions",
      "ec2:AuthorizeSecurityGroupEgress",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CreateVolume",
      "ec2:CreateTags",
      "ec2:AssociateRouteTable",
      "ec2:AllocateAddress",
      "ec2:CreateInternetGateway",
      "ec2:CreateNatGateway",
      "ec2:AttachInternetGateway",
      "ec2:AttachVolume",
      "ec2:DeleteKeyPair",
      "ec2:DeleteDhcpOptions",
      "ec2:DeleteRouteTable",
      "ec2:DeleteNatGateway",
      "ec2:DeleteInternetGateway",
      "ec2:RevokeSecurityGroupIngress",
      "ec2:RevokeSecurityGroupEgress",
      "ec2:DeleteSubnet",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteVolume",
      "ec2:TerminateInstances",
      "ec2:DeleteVpc",
      "ec2:DetachInternetGateway",
      "ec2:ReleaseAddress"
    4. AllowAutoscaling on Resource "*"
      "autoscaling:DescribeTags",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:CreateLaunchConfiguration",
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:CreateAutoScalingGroup",
      "autoscaling:AttachLoadBalancers",
      "autoscaling:EnableMetricsCollection",
      "autoscaling:UpdateAutoScalingGroup",
      "autoscaling:DeleteAutoscalingGroup",
      "autoscaling:DeleteLaunchConfiguration"
    5. AllowELB on Resource "*"
      "elasticloadbalancing:DescribeLoadBalancerAttributes",
      "elasticloadbalancing:DescribeLoadBalancers",
      "elasticloadbalancing:DescribeTargetGroups"
      "elasticloadbalancing:ModifyLoadBalancerAttributes",
      "elasticloadbalancing:ConfigureHealthCheck",
      "elasticloadbalancing:CreateLoadBalancer",
      "elasticloadbalancing:DescribeTags",
      "elasticloadbalancing:AddTags",
      "elasticloadbalancing:DeleteTags",
      "elasticloadbalancing:DeleteLoadBalancer"
    6. AllowRoute53 on Resource "*"
      "route53:GetHostedZone",
      "route53:ListHostedZones",
      "route53:ListResourceRecordSets",
      "route53:ListHostedZonesByName",
      "route53:AssociateVPCWithHostedZone",
      "route53:ChangeResourceRecordSets"
  13. Create an EC2 with the following parameters - This ec2 will be the kops instance where you can manage the Kubernetes cluster
    1. AMI : Ubuntu
    2. VPC : k8-kops-vpc
    3. Subnet: any of k8-kops-subnet-1a, k8-ops-subnet-1b, k8-kops-subnet-1c
    4. Security Group : k8-kops-sg
    5. IAM : k8-kops-role
    6. Instance Type : t2.micro
    7. User Data have the following code

      #!/bin/bash

      #
      # Update the repository and upgrade all the packages
      #
      apt-get update
      apt-get upgrade -y

      #
      # Install awscli
      #
      apt-get install -y awscli

      #
      # Set the aws region variables in the profile for every user logging in
      #
      echo "***** Set aws default region"
      cat > /etc/profile.d/aws-default.sh <<EOF
      export AWS_DEFAULT_REGION=eu-west-1
      export AWS_REGION=eu-west-1
      EOF

      #
      # Install packages required
      #
      apt-get install -y apt-transport-https

      #
      # add the pgp key for kubernetes repository
      #
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

      #
      # add the kubernetes repository location
      #
      cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb http://apt.kubernetes.io/ kubernetes-xenial main
      EOF
      apt-get update

      #
      #Install kubectl
      #
      apt-get install -y kubectl

      #
      # Download the latest kops executable
      #
      curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
      chmod +x kops-linux-amd64

      #
      # Move the latest version to the path
      #
      sudo mv kops-linux-amd64 /usr/local/bin/kops

Now we have built an EC2 instance in AWS which has all the permissions and the software needed to manage your Kubernetes cluster using kops.

All we need to do is invoke the kops commands as needed

Following are the parameters we pass for kops
  • Name : k8.shivag.io # specifies the name of the cluster
  • Zones : eu-west-1a, eu-west-1b, eu-west-1c # the zones in which the cluster will have its nodes
  • State : s3://k8-kops-cluster-state-s3 # the s3 bucket where the configuration will be stored
  • Kubernetes version : 1.18.2 # version of kubernetes to use
  • Networking : calico # the networking that will be used within the pods
  • dns-zone : shivag.io # the hosted zone where we will have route 53 entries
  • Bastion : "true" # indicates creation of an additional instance where we can login and manage the cluster
Create Cluster
kops create cluster \
--name k8.shivag.io \
--zones eu-west-1a,eu-west-1b,eu-west-1c \
--state s3://k8-kops-cluster-state-s3 \
--kubernetes-version 1.18.2 \
--master-count 1 \
--master-size=t2.medium \
--node-count 3 \
--node-size=t2.micro \
--cloud=aws \
--v=5 \
--networking calico \
--dns-zone=shivag.io \
--topology private \
--bastion="true" \
--dns private
This command creates the configuration in the s3 state folder

Update Cluster
kops update cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3 \
--v=5 \
--yes

This command creates all the resources in AWS 

Validate Cluster
kops validate cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3
Delete Cluster
kops delete cluster \
--name k8.shivag.io \
--state s3://k8-kops-cluster-state-s3 \
--yes

This command deletes all the resources and the configuration from s3 state folder