Posts

Showing posts from 2022

Kubernetes Topics

  1) Introduction to Docker Containers. 2) Installation of K8s Cluster. 3) Lifecycle of a Pod. 4) Deployments and strategies. 5) Cluster Upgrade and worker node management. 6) Labels and Selectors. 7) K8s Service Management. 8) Priority and Preemption. 9) Network Policies. 10) Volumes. 11) Secrets and ConfigMaps. 12) Horizontal Pod Autoscaling. 13) Namespaces and Contexts. 14) Metrics Server Configuration. 15) Cluster Access and Role Binding. 16) Web UI Setup. 17) Drain/Taint Nodes. 18) Jobs and Cron. 19) Init and Sidecar containers. 20) ETCD Management. 21) Pod Resource Management. 22) Kubernetes Liveness and Readiness Probes. 23) Ingress controllers.

AWS DevOps

  1) SDLC Automation. 2) CloudFormation. 3) Monitoring and Logging. 4) Elastic Beanstalk. 5) Lambda. 6) API Gateway. 7) ECS. 8) ECS - CodePipeline CI/CD. 9) OpsWorks. 10) CloudTrail & CloudWatch. 11) SSM, Config, Service Catalog, Inspector, Trusted Advisor, GuardDuty. 12) ASG.

AWS Solutions Architect Associate

  1) Introduction to AWS. 2) EC2 Fundamentals. 3) EC2 Instance Storage. 4) High Availability and Scalability - ELB + ASG. 5) Route 53. 6) Amazon S3. 7) IAM Roles and Policies. 8) Advanced Amazon S3. 9) Amazon S3 Security. 10) CloudFront and AWS Global Accelerator. 11) Decoupling Applications - SQS, SNS, Kinesis, ActiveMQ. 12) Networking - VPC. 13) CloudWatch and CloudTrail. 14) Data & Analytics. 15) Serverless Solutions. 16) Introduction to RDS, Aurora and ElastiCache.

Kubernetes - Canary Deployment

  What is a canary deployment? Testing a new feature or upgrade in production is a stressful process. You want to roll out changes frequently, but without affecting the user experience. To minimize downtime during this phase, set up canary deployments to streamline the transition. When you add a canary deployment to a Kubernetes cluster, it is managed by a service through selectors and labels. The service routes traffic to the pods that have the specified label, which lets you add or remove deployments easily. The amount of traffic the canary receives corresponds to the number of pods it spins up. I am going to create an nginx deployment with the following specification: 1) Nginx image version 1.14.2. 2) Deployment with the labels app: nginx and version: "1.0". 3) Replicas: 3. 4) Updating the HTML content to "I am on POD V.10". apiVersion: apps/v1 kind: Deployment metadata:  name: nginx spec:  selector:   matchLabels:    app: nginx ...
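The manifest in the excerpt above is cut off; here is a minimal sketch of the full deployment it describes, built from the stated image tag, labels, and replica count. The file name and the `kubectl apply` step are assumptions.

```shell
# Hedged sketch of the canary-style deployment described above.
# Image tag, labels, and replica count come from the post; the
# file name and apply step are assumptions.
cat > nginx-canary.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: "1.0"
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
# On a live cluster you would then run: kubectl apply -f nginx-canary.yaml
grep -E 'replicas:|image:' nginx-canary.yaml   # show the key fields
```

A Service that selects only `app: nginx` would route traffic to both this deployment and a later `version: "2.0"` canary deployment, in proportion to their pod counts, which is the traffic-splitting behavior the post describes.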

Kubernetes - Configmap and Deployments

  This post is about a request from a colleague: 1) There are 20 microservices. 2) All the microservices use a single ConfigMap. 3) A change made to the ConfigMap should reflect in the microservices. 4) Microservices can also update the ConfigMap as part of the deployment. In this example, I am using nginx and redis deployments sharing a ConfigMap. First, create a ConfigMap from a file: root@masterk8s:/docker# cat cm.txt app1=nginx app2=redis app3=memcached root@masterk8s:/docker# root@masterk8s:/docker# kubectl create configmap app-cm --from-file=cm.txt configmap/app-cm created root@masterk8s:/docker# root@masterk8s:/docker# kubectl get configmap app-cm NAME     DATA   AGE app-cm   1      22s root@masterk8s:/docker# root@masterk8s:/docker# kubectl describe cm app-cm Name:         app-cm Namespace:    default Labels:       <none> Annotations:  <none> Data ==== c...
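One way multiple deployments can share a single ConfigMap is to mount it as a volume: ConfigMaps mounted this way are refreshed in the pod's filesystem after an update (with a kubelet sync delay), whereas environment variables are not. Below is a hedged sketch; the mount path and deployment file name are assumptions, not the post's exact manifests.

```shell
# Recreate the ConfigMap input from the post, then sketch one deployment
# that mounts it. Mount path and file names here are assumptions.
cat > cm.txt <<'EOF'
app1=nginx
app2=redis
app3=memcached
EOF

cat > nginx-with-cm.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: app-config
          mountPath: /etc/app-config
      volumes:
      - name: app-config
        configMap:
          name: app-cm
EOF
# On a live cluster: kubectl create configmap app-cm --from-file=cm.txt
#                    kubectl apply -f nginx-with-cm.yaml
grep -q 'name: app-cm' nginx-with-cm.yaml && echo manifest-ok
```

A redis deployment would reference the same `app-cm` volume, so one `kubectl edit configmap app-cm` eventually propagates to every pod that mounts it.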

Kubernetes - Context

  Context: used to group the access parameters under an easily recognizable name in a kubeconfig file. It's the connection to a particular cluster used by kubectl. A context consists of a cluster name, a namespace, a user, and the configuration (certs/keys) to access the cluster. The term context applies only to the client side; the K8s server side does not recognize the term 'context'. Let me start by creating a namespace called "prod" and creating a pod inside it. root@masterk8s:~# kubectl create ns prod namespace/prod created root@masterk8s:~# root@masterk8s:~# kubectl run nginx --image=nginx -n prod pod/nginx created root@masterk8s:~# root@masterk8s:~# kubectl get pods -n prod NAME    READY   STATUS    RESTARTS   AGE nginx   1/1     Running   0          21s root@masterk8s:~# When calling kubectl get pods -n prod, you're retrieving the list of pods located under the namespace 'prod'. If...
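The three things a context groups (cluster, user, namespace) are easiest to see in a kubeconfig file itself. Below is a minimal hedged sketch; the server address, names, and credential paths are placeholders, not values from the post.

```shell
# Minimal kubeconfig sketch showing what a context groups together.
# Server address, names, and credential paths are placeholders.
cat > demo-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://192.0.2.10:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: demo-user
  user:
    client-certificate: /path/to/user.crt
    client-key: /path/to/user.key
contexts:
- name: prod-context
  context:
    cluster: demo-cluster
    user: demo-user
    namespace: prod
current-context: prod-context
EOF
# With this file, `kubectl --kubeconfig demo-kubeconfig.yaml get pods`
# would target the 'prod' namespace without needing -n prod.
grep 'current-context' demo-kubeconfig.yaml
```

Because the context pins the namespace to `prod`, switching contexts (`kubectl config use-context`) changes cluster, user, and default namespace in one step.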

Kubernetes User Creation

  In this post we will see how to create a user in Kubernetes. There are 3 building blocks needed for user creation in Kubernetes: 1) Private Key. 2) CSR (Certificate Signing Request). 3) Certificate. Let's start by creating all 3 using the openssl command. root@masterk8s:/docker# openssl genrsa -out sam.key 2048 Generating RSA private key, 2048 bit long modulus (2 primes) ................+++++ ..........................+++++ e is 65537 (0x010001) root@masterk8s:/docker# Create a CSR with the private key generated and add the user to the group "developer". We will be creating a namespace called "developer" too. root@masterk8s:/docker# openssl req -new -key sam.key -out sam.csr -subj "/CN=sam/O=developer" Next, we need to sign the CSR with the K8s default CA CRT and KEY files located under /etc/kubernetes/pki root@masterk8s:/docker# openssl x509 -req -in sam.csr \ > -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \ > -CAcreateserial ...
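The signing step above requires the cluster CA under /etc/kubernetes/pki, which only exists on a control-plane node. As a self-contained stand-in, the same key → CSR → signed-certificate flow can be exercised with a throwaway local CA; the demo CA name and one-day validity are assumptions for illustration only.

```shell
# Stand-in for /etc/kubernetes/pki/ca.{crt,key}: a throwaway local CA,
# so the whole flow runs without a cluster. Do NOT use this CA for real auth.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 1 -subj "/CN=demo-ca" -out ca.crt

# Same commands as the post: user key, then a CSR for user 'sam'
# in group 'developer' (Kubernetes reads the user from CN, the group from O).
openssl genrsa -out sam.key 2048
openssl req -new -key sam.key -out sam.csr -subj "/CN=sam/O=developer"

# Sign the CSR with the (stand-in) CA, as the post does with the cluster CA.
openssl x509 -req -in sam.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 1 -out sam.crt

# Inspect the resulting subject: CN=sam, O=developer.
openssl x509 -in sam.crt -noout -subject
```

On a real cluster, the resulting sam.crt and sam.key would then be added to a kubeconfig with `kubectl config set-credentials`, and RBAC bindings would grant the `developer` group its permissions.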

K8s - API Server

  What is a kube API Server? The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.  You can run several instances of kube-apiserver and balance traffic between those instances. root@masterk8s:/# kubectl describe pod -n kube-system kube-apiserver-masterk8s Command:       kube-apiserver       --advertise-address=192.168.163.128       --allow-privileged=true       --authorization-mode=Node,RBAC       --client-ca-file=/etc/kubernetes/pki/ca.crt       --enable-admission-plugins=NodeRestriction       --enable-bootstrap-token-auth=true       --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt       --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client...

Docker Supervisor

Traditionally a Docker container runs a single process when it is launched, for example an Apache daemon or an SSH server daemon. Often, though, you want to run more than one process in a container. There are a number of ways to achieve this, ranging from using a simple Bash script as the value of your container's CMD instruction to installing a process management tool such as Supervisor to manage multiple processes in a container. Using Supervisor allows you to better control, manage, and restart the processes inside the container. Let's say you want an SSH process running in the Docker container. Let's build the image: 1) Use the ubuntu image. 2) Install SSH and Supervisor. 3) Create folders for SSH and Supervisor. 4) Create supervisord.conf and update it with the SSH startup command. 5) Copy supervisord.conf to the image. 6) Expose port 22. 7) Start supervisord. root@masterk8s:/docker# cat supervisord.conf [supervisord] nodaemon=true [program:sshd] command=/usr/sbin/ss...
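The seven steps above can be sketched as the two files involved. The supervisord.conf fragment matches the truncated excerpt; the Dockerfile is a hedged reconstruction where the base image tag and exact package names are assumptions.

```shell
# Sketch of the two files the steps describe. Base image tag and
# package names are assumptions; adjust for your distribution.
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D
EOF

cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y openssh-server supervisor && \
    mkdir -p /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
EOF
# Build and run (needs a Docker daemon):
#   docker build -t ssh-supervisor . && docker run -d -p 2222:22 ssh-supervisor
grep -q 'EXPOSE 22' Dockerfile && echo files-ok
```

`nodaemon=true` keeps supervisord in the foreground so it remains the container's PID 1; `-D` does the same for sshd under supervisord, letting Supervisor restart it if it dies.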

Docker Container CPU and RAM Utilization

  In this post I will talk about resource allocation for Docker containers. By default, a Docker container has no capacity limit; it is entitled to use all of the host's available resources. root@masterk8s:~# lscpu | grep -i "CPU(s)" CPU(s):                          2 On-line CPU(s) list:             0,1 NUMA node0 CPU(s):               0,1 root@masterk8s:~# The host has 2 CPUs. I am going to create a container using the image progrium/stress, a CPU/memory stress simulator. root@masterk8s:~# docker run -d --rm progrium/stress -c 8 -t 20s The above command creates a CPU spike in the container for 20 secs. Since we have not set any limit on the container, it keeps using the host CPU, reaching a maximum of 197% (as the host has 2 CPUs, the ceiling is ~200%). Let's create a container limited to 1 CPU. Now, you can see the docker container is b...

Kubernetes Syllabus

  1) Introduction to Orchestration. 2) Installation of K8s with Docker engine. 3) Overview of Docker and containers. 4) Pods, Deployments, Services, Batches, Cron, DaemonSets etc. 5) Horizontal Pod Autoscaling and Metrics Server. 6) ReplicaSets. 7) Service configuration. 8) Tenants and Contexts. 9) User and Role Management. 10) ETCD. 11) Kubernetes Volumes. 12) Readiness and Liveness Probes. 13) Namespaces and Quotas. 14) Pod resource configuration. 15) Pod Tolerations, Priorities. 16) Taint and Draining of nodes. 17) Application Deployment Strategies. 18) Secrets and ConfigMaps. 19) Network Policies. 20) Init and Sidecar containers. 21) Integration of K8s in CI/CD pipeline* 22) Headless service. 23) Admission Controllers. 24) Secret Encryption. 25) Cron and Batch jobs. 26) Interview Q&A.

Kubernetes Horizontal Pod Autoscaling - HPA

  In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods.  Horizontal pod autoscaling does not apply to objects that can't be scaled (for example: a DaemonSet.) From the most basic perspective, the HorizontalPodAutoscaler controller operates on the ratio between desired metric value and current metric value: desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )] For example, if the current metric value is 200m, and the desired value is 100m, the number of replicas will be doubled, since 200.0 / 100.0 == 2.0  If the current value is instead 50m, you'll halve the number of replicas, since 50.0 / 100.0 == 0.5.  The control plane skips any scaling action if the ratio is sufficiently close to 1.0 (within a globally-configurabl...
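The replica formula above can be checked with a quick arithmetic sketch (no cluster needed); the starting replica count of 3 is an assumed example, and the metric values are the 200m/100m and 50m/100m cases from the text.

```shell
# The HPA formula from the text, as a small awk helper:
#   desiredReplicas = ceil(currentReplicas * currentMetric / desiredMetric)
ceil_replicas() {
  awk -v c="$1" -v cur="$2" -v des="$3" \
    'BEGIN { r = c * cur / des; d = int(r); if (d < r) d++; print d }'
}
ceil_replicas 3 200 100   # metric at 2x the target: 3 replicas -> 6
ceil_replicas 3 50 100    # metric at half the target: ceil(1.5) -> 2
```

The second case also shows why the ceiling matters: 3 * 50/100 = 1.5, and the controller rounds up to 2 replicas rather than dropping to 1.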