Continuous Delivery on AWS Cloud

The name of this project is continuous delivery on AWS.

Yes, everything will be on AWS.

And no, we're not going to use EC2 instances here.

We're going to use PaaS and SaaS services.

So when should you do such kind of project?

What is the requirement?

Let's understand those things first.

Okay, so let's say there is a product development team working in an agile environment, and they're going to make regular code changes to build the product. Now let's say that this development team is running short on operations: they don't have many, or any, system admins or cloud engineers. But they are making regular code changes, and these code changes need to be built, tested, and deployed. And for deployment, you really need an operations team there. So regular code changes mean regular packaging of the software and then regular deployment on the servers. And after deployment, you need to conduct further testing, like software testing and integration testing. Now, I think you should have already understood the problem, but let's still talk about it. Okay, so we're talking about today's developers. They're fast, they're quick. They're going to make continuous code changes. And if the code deployment process is manual, it will be time consuming. Plus, the developers and testers here are not equipped with ops knowledge. We don't have an operations team, or we have a very small one. But anyway, these things need to be done. So what can they do? Well. They can hire some professionals, right?

Operational professionals, system admins, cloud engineers, or even outsource.

We have to understand here, even if they hire or outsource, there is a dependency set up. If they set up CI/CD servers like Jenkins, Nexus, and SonarQube, there will be regular maintenance overhead. If the targets are servers, or even EC2 instances, you'll have all the overhead of managing the target machines. Also, I'm talking about the dev and test environments, not the production environment. So developers are going to make regular code changes that need to be tested, deployed on servers, then tested further with software testing, and then the release can be promoted to production. So release management will also need a lot of help from the operations team.

So what do they do?

Well, instead of depending on the operations team, they can use the platform as a service or software as a service offerings provided by the cloud. AWS has many such services where you don't need to manage virtual machines, EC2 instances, network, or storage. So you don't need to manage all those things.

You don't really need an operations team to manage those; developers or testers with a little bit of knowledge of the cloud can use those services. And moreover, we're talking about the pre-prod environment, which we know can be a disposable environment.

So once you have disposable environments, you can set a CI-CD pipeline which can automatically deploy.

The software, and any changes, to these disposable environments. Once it is tested, or once your release management is completed, you can just delete those environments. You don't need to continuously manage them.

So make a code change, build it, test it, deploy it, then test it again.

You do it for every commit, and you're going to use just the developers and testers.

These people are going to use PaaS and SaaS services provided by the cloud.

So for such kind of projects, we can set up continuous delivery process on cloud.

So once these developers have a continuous delivery pipeline on the cloud, they can repair any issues very quickly. So short MTTR (mean time to repair). It goes very well with the Agile process. So it will be quick: as soon as the code is changed, the process runs continuously and gives you the result. No human intervention over there, and no operations team intervention either. Any fault can be isolated quickly as well.

And we are talking about a CI/CD pipeline on the cloud, but using the cloud's managed services. So no operations team intervention again. So if you see, we are automating, plus we are also removing dependencies here.

Now let's see all the AWS services that we are going to use to set up this continuous delivery pipeline.

Starting with CodeCommit.

So CodeCommit is going to be our version control system.

CodeArtifact.

This is where we are going to store our Maven dependencies.

So Maven will download all the dependencies from the CodeArtifact service.

CodeBuild service.

We're going to use this service for multiple things.

One, to build our artifact, of course; two, to run the Sonar scanner for code analysis; and three, to run software testing.

You also have different platforms in CodeBuild, like a Linux platform and a Windows platform, so you can execute different kinds of jobs.

Then we're going to use the CodeDeploy service.

This service is also multipurpose.

We can use it to deploy our artifact to various targets: we can store it in an S3 bucket, we can deploy it to a Beanstalk environment, or we can deploy it to EC2 instances.

So in our project, we are going to deploy to a Beanstalk environment.

A few more services.

SonarCloud we're going to use to do code analysis.

Now, this is not from AWS; this is going to be a separate service altogether.

So we are going to sign up to SonarCloud and create an account.

It's SonarQube on the cloud, a SonarQube server hosted on the cloud.

Next, Selenium software testing, which we're going to run through the CodeBuild service.

So the place we are going to deploy to is Beanstalk, which is going to host our application. RDS we're going to use for the database, and CodePipeline, finally, to connect all these things together.

So you see, we are not using any EC2 instances; we're not going to deploy our artifact to any EC2 instance. We're going to deploy it on Beanstalk, which is also going to use RDS for the database.

So platform as a service for application hosting and also for database.

So what do we do?

We have to keep our objectives in mind.

We need no, or very low, operations overhead, and a short MTTR.

We need fast turnaround time.

So all the automation, we are doing it for that.

So we can quickly make changes whenever there is a requirement, and if there is any issue, any bug, we can resolve it very quickly.

And less disruptive, of course.

So if you have done our previous CI/CD project with Jenkins and SonarQube, I would like to make a quick comparison.

So: the CodeCommit service instead of GitHub, CodeArtifact instead of Sonatype Nexus, CodeBuild instead of Jenkins jobs, SonarCloud instead of a SonarQube server, and AWS CodePipeline instead of creating a Jenkins pipeline.

So these are the comparisons.

On the left-hand side are the services that you're going to use in order to have no-ops, or less operations overhead.

The left-hand side services are the AWS cloud services.

A few more comparisons: we're going to use Beanstalk instead of Tomcat on EC2 instances, and we're going to use AWS RDS instead of managing our database on a VM or an EC2 instance.

All right.

It's time to achieve our goals now.

But before we go there.

The architecture of the continuous delivery pipeline.

Okay.

So as we have been discussing so far, developers are going to make regular code changes and they're

going to commit.

Once they commit the code, this pipeline gets started for every commit.

So the commit is going to happen on the CodeCommit service, which is then going to trigger the next job, a CodeBuild job.

This job is going to do code analysis with SonarCloud. It's going to use the Sonar scanner, and if it needs any Maven dependency, that will be downloaded from CodeArtifact. So we have to set that up as well.

Also, the CodeBuild service is going to trigger one more job. This is going to build the artifact, and if it needs any dependency, it is going to download it, again from CodeArtifact.

So these build jobs are actually going to run Maven, and the Maven dependencies will be downloaded from CodeArtifact.

And once the artifact is created, we are going to store it on an S3 bucket.

And then we are going to have one more job, a deployment job, which is going to deploy our artifact to Beanstalk.

So you'll have a Beanstalk environment already ready.

So the CodeDeploy service is going to automatically deploy the artifact to the Beanstalk environment, and Beanstalk will also be connected with RDS.

So that's the whole flow.

Plus, we're going to have one more job, which is going to be software testing.

We'll execute that also from a code build service.

So it will come after the deploy.

And finally, let's see the flow of execution.

First, we're going to log into the AWS account.

We're going to go to the CodeCommit service.

And we're going to create a code commit repository, like we create repository on GitHub.

Then we're going to sync it with our local repository.

So the local Git repository will be synced with the CodeCommit repository.
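The sync step boils down to adding the CodeCommit HTTPS endpoint as a Git remote in the local repository. Here is a minimal sketch; the region and repository name are placeholders, not the course's actual values:

```shell
# Hypothetical example: wire a local Git repo to a CodeCommit remote.
# Region (us-east-1) and repo name (my-code-repo) are placeholders.
mkdir -p /tmp/cc-demo && cd /tmp/cc-demo
git init -q
git remote add origin \
  https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-code-repo
git remote -v   # lists the fetch/push URLs for origin
```

After this, pushes authenticate with CodeCommit Git credentials or the AWS CLI credential helper, and `git push origin <branch>` works just as it would against GitHub.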

Then we'll go to the CodeArtifact service, and we're going to create a repository over there where the Maven dependencies will be stored.

And we're going to update the settings.xml file with the details of the CodeArtifact repository; the pom.xml file will also be updated with the repository details.

And we're going to generate a token so our Maven can access this CodeArtifact repository, and this token will be stored in the SSM Parameter Store.
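The settings.xml change essentially points Maven at the CodeArtifact endpoint and authenticates with the generated token. A rough sketch of the relevant fragment; the domain name, account ID, region, and repository name are all placeholders you would replace with your own:

```xml
<!-- Sketch of a settings.xml fragment for CodeArtifact.
     Domain, account ID, region, and repo name are placeholders. -->
<settings>
  <servers>
    <server>
      <id>codeartifact</id>
      <username>aws</username>
      <!-- Token from `aws codeartifact get-authorization-token`,
           exported as CODEARTIFACT_AUTH_TOKEN -->
      <password>${env.CODEARTIFACT_AUTH_TOKEN}</password>
    </server>
  </servers>
  <mirrors>
    <mirror>
      <id>codeartifact</id>
      <name>CodeArtifact</name>
      <url>https://my-domain-111122223333.d.codeartifact.us-east-1.amazonaws.com/maven/my-repo/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

With a mirror of `*`, every dependency request Maven makes is routed through the CodeArtifact repository, which is exactly the behavior described above.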

Okay.

Next is going to be the Sonar job setup.

So first we're going to create a SonarCloud account.

We're going to generate a token and a few parameters, and then we're going to store these parameters, again, in the Parameter Store.

Then we're going to create a build project which will run our Maven job to execute the Sonar scanner.

And before that, we're going to update the CodeBuild role so it can access the Parameter Store, so the CodeBuild job can access the parameters which are stored over there.
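A CodeBuild project is driven by a buildspec file, and the Parameter Store values surface there as environment variables. The sketch below shows roughly what the Sonar analysis job's buildspec could look like; the parameter names, runtime version, and Sonar properties are assumptions for illustration, not the course's exact values:

```yaml
# Sketch of a buildspec.yml for the SonarCloud analysis job.
# Parameter Store paths (/myapp/...) and values are placeholders.
version: 0.2
env:
  parameter-store:
    SONAR_TOKEN: /myapp/sonartoken
    SONAR_ORG: /myapp/sonarorg
phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      # Run unit tests, then push analysis results to SonarCloud
      - mvn verify sonar:sonar
          -Dsonar.login=$SONAR_TOKEN
          -Dsonar.organization=$SONAR_ORG
          -Dsonar.host.url=https://sonarcloud.io
```

The `env.parameter-store` block is what requires the CodeBuild role update mentioned above: without `ssm:GetParameters` permission, the job fails before the build phase even starts.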

We're going to create notifications so we can get notified about our pipeline and any job.

Then we create a build project which is going to build our artifacts.

So we have a few more parameters that we're going to put, again, in the Parameter Store.

So basically variables. Then we're going to create the build project, which will actually generate the artifact.

Then we'll create a pipeline which will connect all these jobs together, and we'll test it by making a code change.
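Conceptually, the pipeline definition just chains these jobs as ordered stages. A trimmed sketch of what a CodePipeline structure for this CI part could look like (stage and pipeline names here are illustrative, not the exact ones used in the course):

```json
{
  "pipeline": {
    "name": "myapp-ci-pipeline",
    "stages": [
      { "name": "Source",        "actions": [{ "provider": "CodeCommit" }] },
      { "name": "CodeAnalysis",  "actions": [{ "provider": "CodeBuild" }] },
      { "name": "BuildAndStore", "actions": [{ "provider": "CodeBuild" }] },
      { "name": "DeployToS3",    "actions": [{ "provider": "S3" }] }
    ]
  }
}
```

Each stage runs only after the previous one succeeds, which is why a single commit on the Source stage ripples through analysis, build, and the S3 upload automatically.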

So when there is a code change on code commit, then it will trigger this entire pipeline.

And we'll see an artifact uploaded in the S3 bucket.

So till here, it's continuous integration.

We have set up continuous integration.

Now we're going to extend it further, and we'll be setting up the continuous delivery pipeline.

So we need an environment where we can deploy our artifact.

So we'll be creating Beanstalk and RDS: Beanstalk, where we're going to upload our artifact, and RDS for the database. We're going to update the security group so the Beanstalk instances can access it.

We're going to deploy a database in RDS, and then we're going to switch our branch from ci-aws to cd-aws. We're going to use a different branch in this project, cd-aws, and we're going to update the settings.xml file and pom.xml file in this project.

And then we're going to create another job which is going to build the artifact again, and the buildspec file is going to be different for this one.
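For this build-and-release job, the buildspec mainly needs to fetch the CodeArtifact token, build with Maven, and hand the artifact to the pipeline. A sketch under the same caveat as before: domain, account ID, and artifact path are placeholders:

```yaml
# Sketch of a buildspec.yml for the build-and-release job.
# Domain, owner account ID, and WAR path are placeholders.
version: 0.2
phases:
  pre_build:
    commands:
      # Token lets Maven authenticate to CodeArtifact via settings.xml
      - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token
          --domain my-domain --domain-owner 111122223333
          --query authorizationToken --output text)
  build:
    commands:
      - mvn -s settings.xml clean install -DskipTests
artifacts:
  files:
    - target/*.war
```

The `artifacts` section is the difference that matters here: it exposes the built WAR as the stage's output artifact, which the deploy job then picks up and uploads to Beanstalk.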

Okay.

And then we're going to create a deploy job which is going to take our artifact and deploy it to Beanstalk, or you can say upload it to Beanstalk.

Then we will have a job which is going to run our Selenium software testing scripts.

It is also going to upload our screenshots and all the output to the S3 bucket.

Okay, Then we are going to update our pipeline.

So we already have the CodeCommit job, the code test job, and the build-and-store job, and we deploy to the S3 bucket.

We're going to add a build-and-release job which is going to build the artifact and then deploy it to Beanstalk.

So there will be a deploy job again.

And then we're going to run the Selenium test scripts, again from a CodeBuild job, and upload all the results to the S3 bucket.

And then finally, we'll test our pipeline.

Okay, so let's not wait any further and jump to the AWS console.

Okay.

So first we are going to set up continuous integration pipeline.

If you have already done this in the previous project, then you can skip ahead, go directly to the continuous delivery part, and continue from there.

Or if you need a revision, you can watch it once again.

Once we set up a continuous integration pipeline, then we'll extend it to continuous delivery pipeline.

So first, we'll set up what you see on the screen right now.

And once this is done, then we're going to set up the next part.

So continuous integration will be extended to continuous delivery pipeline.

So let's get started.