DevOps_Project
Problem Statement:
- Deploy a webserver using the best principles, tools, and techniques of the DevOps culture.
Brief Explanation of Solution:
Here I am using Ansible as the configuration management system, Jenkins for CI/CD with a multi-node cluster, jobs built with Jenkins Job DSL code, and for production one of the well-known AWS services, i.e. AWS EKS (Elastic Kubernetes Service).
Prerequisite:
- Linux OS (RHEL, CentOS).
- An AWS account.
- An IAM user created with the AdministratorAccess permission.
- AWS CLI v2 configured in Linux with the IAM user.
- kubectl and eksctl configured in Linux.
- Ansible and Git installed in Linux.
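Before starting, it can help to confirm the tools above are actually on the PATH. A minimal sketch (the tool list mirrors the prerequisites; adjust it for your own setup):

```shell
#!/bin/sh
# Check that each prerequisite CLI is installed and on PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

# Tools from the prerequisite list above.
for t in aws kubectl eksctl ansible git; do
    check_tool "$t"
done
```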
Step-1) Developing the website code:
Note: one point here, we need one Dockerfile to build the Docker image for our webserver.
Explanation of the Dockerfile: my Dockerfile copies all the code into the image so that the webserver image can be built.
Now the time has come for the DevOps team to deploy this code using the best practices, tools, techniques, etc.
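The webserver Dockerfile itself is not shown above; a minimal sketch of what it could look like, assuming an httpd-based PHP webserver with the site code in the repository root (the base image and paths here are illustrative, not the project's actual file):

```dockerfile
# Hypothetical webserver Dockerfile: copy the site code into an httpd image.
FROM centos
RUN dnf install httpd php -y
# Copy all repository code into Apache's document root.
COPY . /var/www/html/
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```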
Step-2) Getting the setup ready.
- Here I am running Jenkins on top of Docker, so to set up Jenkins we first have to install Docker in the OS. For the configuration management system, I am using Ansible roles.
- Here my Ansible roles path and all the related settings are configured.
- To run this role we have to create one playbook and use the role in that playbook.
- Code of Ansible-Playbook:
- hosts: localhost
  roles:
    - docker_rehl8
- To run this playbook, type:
ansible-playbook FILE_NAME.yml
Explanation of the role and playbook: here I first configure a yum repository for Docker using the yum_repository module, then install Docker using the command module. Once the installation is complete, I pull my Jenkins image, which I had already built and pushed to hub.docker.com, using an Ansible module that pulls it into my OS.
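The role's task file is not listed here; a sketch of the tasks the explanation above describes might look like this (the repository baseurl and install command are assumptions based on a typical CentOS 8 Docker setup; the image tag matches the one used later in this document):

```yaml
# roles/docker_rehl8/tasks/main.yml (sketch)
- name: Configure the yum repository for Docker
  yum_repository:
    name: docker-ce
    description: Docker CE repo
    baseurl: https://download.docker.com/linux/centos/8/x86_64/stable/
    gpgcheck: no

- name: Install Docker
  command: dnf install docker-ce --nobest -y

- name: Start and enable the Docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Pull my Jenkins image from hub.docker.com
  command: docker pull rootritesh/jenkins:v1
```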
Dockerfile of Jenkins:
FROM centos
RUN dnf install wget -y
RUN wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins.io/redhat-stable/jenkins.repo
RUN rpm --import http://pkg.jenkins.io/redhat-stable/jenkins.io.key
RUN dnf install java-11-openjdk.x86_64 jenkins -y
RUN dnf install -y openssh-server
RUN dnf install net-tools -y
RUN dnf install git -y
RUN dnf install httpd -y
RUN dnf install which -y
RUN dnf install python36 -y
RUN pip3 install ansible
RUN echo "jenkins ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN echo "java -jar /usr/lib/jenkins/jenkins.war &" >> /root/.bashrc
CMD ["bash"]
Output:
Step-3) Jenkins setup.
- Now we just need to start the Jenkins container; type:
docker run -it --name jenkins1 -p 8082:8080 -v /root/jenkinsdata/:/root/.jenkins rootritesh/jenkins:v1
- This starts the container and publishes Jenkins' port 8080 on host port 8082; to make the data persistent, I mount /root/jenkinsdata from the host onto the Jenkins home directory.
- Now configure Jenkins:
- Copy-paste the initial admin password from the terminal (it is also stored at /root/jenkinsdata/secrets/initialAdminPassword on the host).
- No need to install the suggested plugins right now; close the wizard.
Step-4) Installing plugins in Jenkins.
- Go to Manage Jenkins > Plugin Manager.
- Install these plugins: GitHub, Job DSL, Pipeline.
- Click on "Install without restart".
Step-5) Configure Jenkins.
- Here I am using a Jenkins multi-node cluster. To configure it, go to Manage Jenkins > Manage Nodes > New Node > add a new node.
- I am using two nodes: one for building the developer job and another for the DevOps team.
- Now launch one EC2 instance in AWS and configure it:
$ sudo su -
# yum install git -y
# yum remove java-1.7.0-openjdk -y
# yum install java-1.8.0 -y
# yum install docker -y
# systemctl enable --now docker
# docker login      (type your Docker Hub username and password)
# docker pull centos
# docker pull rootritesh/webserver-php
- We need this configuration for the dev job.
- For the first node, I am using AWS EC2.
- Add the credential: select Kind > "SSH Username with private key" > give the username (ec2-user) > paste the private key > add it.
- Save it.
- The output of the AWS node.
- The second node is my base Linux OS where everything is configured: AWS CLI, eksctl, etc.
- For the credentials of the base Linux OS, connect via SSH and provide the username and password.
- Save it.
- The output of the Linux Node.
- After that, we need to configure email notification in Jenkins.
- When a job fails, Jenkins automatically sends an email to the team.
- Go to Manage Jenkins > Configure System.
- At the bottom there is an option "E-mail Notification".
- It is working; the test configuration succeeded.
Step-6) Jenkins DSL job.
- Make one freestyle job named jobdsl.
- Configure it using the following Groovy DSL code.
Code:
job('devjob') {
    label('aws')
    scm {
        github('rootritesh/DevOps_Project')
    }
    triggers {
        scm('* * * * *')
    }
    steps {
        shell('''
pwd
sudo docker build -t rootritesh/myproserver:v8 .
sudo docker push rootritesh/myproserver:v8''')
    }
}
job('prod') {
    label('kube')
    steps {
        shell('''
wget https://raw.githubusercontent.com/rootritesh/AWS-EKS-TASK/master/cluster.yaml
eksctl create cluster -f cluster.yaml
sleep 5
kubectl get pods
if kubectl get deployments | grep myweb
then
    echo "Webserver is running"
    kubectl describe deployment myweb
else
    kubectl create deployment myweb --image=rootritesh/myproserver:v8
    kubectl scale deployment myweb --replicas=5
    kubectl expose deployment myweb --port=80 --type=LoadBalancer
fi''')
    }
}
job('status') {
    label('kube')
    triggers {
        scm('* * * * *')
    }
    steps {
        shell('''
if kubectl get deployments | grep myweb
then
    echo "Deployment is there"
else
    exit 1
fi''')
    }
    publishers {
        mailer('rootritesh34@gmail.com', false)
    }
}
Explanation: here I am using the Jenkins Job DSL (Groovy) to configure the jobs.
Step-7) Configure a Jenkins pipeline job.
- Make one pipeline job named MYAL.
- Configure it using the following code.
Code:
pipeline {
    agent any
    stages {
        stage('Dev') {
            steps {
                echo 'building dev jobs'
                build job: 'devjob'
            }
        }
        stage('Prod') {
            steps {
                echo 'building production world'
                build job: 'prod'
            }
        }
        stage('checking status') {
            steps {
                echo 'status'
                build job: 'status'
            }
        }
    }
}
Explanation: this pipeline job runs the configured jobs one by one.
Step-8) Run the jobdsl job.
- Go to the jobdsl job and build it.
Output of jobdsl:
- Here our job built successfully.
The output of the jobs:
- Here our jobs were created successfully.
- Names of the jobs:
- devjob
- prod (production)
- status
Step-9) Run the pipeline job.
- Now go and build the MYAL pipeline job.
The output of the Pipeline:
Explanation of the jobdsl Groovy code:
Here I am creating three jobs: one for development (devjob), a second for production (prod), and a third for checking the status of the deployment (status).
Explanation of devjob:
When devjob is built by the pipeline, it first connects to the AWS node, where it downloads the developer code and builds the Docker image for the production world from the Dockerfile provided in the GitHub repository. After the build, it pushes the image to hub.docker.com so it can be pulled later.
Output of devjob:
- Here our code has been downloaded successfully.
- Here our Docker image was built successfully.
- Here our image has been pushed to hub.docker.com.
Explanation of the prod job:
After devjob, the prod job runs. It first connects to the Linux node for my AWS EKS (Elastic Kubernetes Service) setup and downloads the cluster configuration from GitHub; the cluster uses spot instances. It then creates the EKS cluster with the eksctl command. Once the cluster is up, it creates a deployment named myweb for production using the image built by devjob and scales it. For accessing my website I am using a LoadBalancer service in EKS, which is backed by an AWS load balancer.
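The actual cluster.yaml lives at the GitHub URL used in the prod job; an illustrative eksctl config using a spot node group (the cluster name, region, and instance types below are assumptions, not the project's real values) could look like:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myproject-eks      # assumed cluster name
  region: ap-south-1       # assumed region
nodeGroups:
  - name: spot-ng
    desiredCapacity: 2
    # 100% spot instances, as described above
    instancesDistribution:
      instanceTypes: ["t3.medium", "t3a.medium"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
```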
Output of prod job:
- Here the EKS cluster is ready.
- Here our node group has been configured.
- Here you can see all my deployments running fine.
- An AWS load balancer, created by the Kubernetes service, for accessing our website.
Explanation of the status job:
This job checks whether the deployment is there or not; if it is not, it sends an email to the respective team.
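The check in the status job is just grep's exit status driving an if/else. A stand-alone sketch of the same logic, using a plain string in place of the real `kubectl get deployments` output:

```shell
#!/bin/sh
# Return 0 (and print a message) if the deployment name appears in the
# given listing, otherwise return 1 -- mirroring the status job above.
check_deployment() {
    listing="$1"    # simulated `kubectl get deployments` output
    name="$2"       # deployment to look for
    if printf '%s\n' "$listing" | grep -q "$name"; then
        echo "Deployment is there"
    else
        echo "Deployment missing"
        return 1
    fi
}

check_deployment "myweb 5/5 5 5" myweb
```

When the function returns 1, Jenkins marks the build as failed, which is what triggers the mailer publisher.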
Output of status job:
- This is the test email sent by Jenkins; it means our Jenkins email job is working fine.
Final Output:
- Copy the load balancer DNS name into the browser for the website output.
- Here our website comes up, running on the EKS cluster behind the load balancer.
Note: For monitoring these node groups we can use the monitoring services provided by AWS, but that is not what I recommend here; instead we can use Prometheus and Grafana for metrics and Splunk for logs. Beyond that, we can also set up alarms.
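As a taste of the Prometheus option, a minimal scrape configuration for node metrics (the node-exporter targets below are placeholder assumptions; in a real cluster you would use Kubernetes service discovery instead of static IPs):

```yaml
# prometheus.yml (sketch): scrape node-exporter on two worker nodes
scrape_configs:
  - job_name: eks-nodes
    static_configs:
      - targets:
          - 10.0.1.10:9100   # assumed node IPs running node-exporter
          - 10.0.2.11:9100
```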
Final Explanation:
Here I completed this project using end-to-end automation. For configuration management I use Ansible. My devjob runs on AWS EC2 using a Jenkins multi-node cluster, and my production runs on one of the most powerful AWS services, i.e. AWS EKS, which provides the best resources, high availability, etc. Finally, my status job checks whether my deployment is there or not; if it is not, it sends an email to the teams.