Generic Pipelines Using Docker

Generic Pipelines Using Docker: The DevOps Guide to Building Reusable, Platform Agnostic CI/CD Frameworks

DevOps (a clipped compound of “development” and “operations”) is a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, and releasing to deployment and infrastructure management.

That definition pretty much sums up what most people think of when they hear the term DevOps. Not because it perfectly describes what DevOps is; if anything, it leaves you more confused than before you asked the question “What is DevOps?”. DevOps comes in so many forms that it can be hard to nail down exactly what it is. I’ve worked in a lot of different shops in my time, and DevOps has been something different at each stop.

For most, it involves developers writing code that is checked into source control, which immediately kicks off a build pipeline that deploys your application. That pipeline performs various stages, such as building, testing, and deploying. From my experience, if you’re just doing these simple steps through automation, you’re way ahead of some. However, for many this is not enough for your organization to be considered as fully embracing DevOps.

You may also want to include things like automated infrastructure creation, security scans, static code analysis, and more. In an ideal situation, everything you do in a software shop is stored as code in source control. This includes your application code, infrastructure scripts, database scripts, networking setup, etc. With the push of a button, everything your application needs to run can be created on the fly. For a lot of shops, this is what it means to truly embrace DevOps.

Some organizations are more mature than others when it comes to DevOps practices. In my experience you’re well on your way to maturity if you’re doing the following items:

  • You deploy your application via an automated pipeline.
  • Once built and tested, application code binaries can be promoted via an automated pipeline.

I realize a lot of people will look at that list and exclaim “WHAT?!”. There are only two items, and your organization may be way beyond those. However, most are not and would see immense improvements by just doing these two things. If you’re one of the folks who thinks this list is crazy small, then count yourself lucky. I know many people who would kill for just these two items.

Automated pipelines, in my opinion, are the lifeblood of good DevOps practices. They provide many benefits, both to the team of developers and to the business that relies on their code. A well-crafted pipeline gives you a repeatable process for building, testing, and deploying your application. It can be used to create artifacts that, once built, are simply promoted, ensuring that what makes it to Production has been tested and vetted.

Pipelines can be a pain point for organizations as well. You may have multiple applications, each written in a different language, and each with its own finicky way of being built. It can be a nightmare at times jumping between technologies, debugging the various stages, and keeping the lights on. Luckily, modern technology is helping us get around some of these issues. Technology like Docker has given us an opportunity to standardize our platforms. By utilizing Docker in your pipeline, you can give your developers some peace of mind: if they can build it, so can you.
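
To make that concrete, here is a minimal sketch of the idea (the image tag and build commands are illustrative assumptions for a Node.js project, not examples from this book): the build runs inside a container, so the only tool the build agent needs installed is Docker itself.

    #!/usr/bin/env bash
    # Minimal sketch: run the build inside a container so every machine
    # uses the exact same toolchain. The node:18 image and npm commands
    # are illustrative assumptions, not this book's examples.
    set -euo pipefail

    # Mount the checked-out source into a throwaway container and
    # build and test it there.
    docker run --rm \
      -v "$(pwd)":/workspace \
      -w /workspace \
      node:18 \
      bash -c "npm ci && npm test"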

Combine this with cloud technology like Amazon ECS or Azure Container Service, and now we can extend that peace of mind all the way to deployment. If it runs locally in your Docker daemon, it will run in the cloud. This even applies if you’re running an enterprise cloud platform like Nutanix or Dell; if the destination is a container orchestration service, you’re golden. Now, consider that .NET Core is open source and can run on Linux, and you’ve pretty much covered all your bases. It’s a great time to be in technology!

This book aims to show how you can use all this technology to simplify your pipelines and make them truly generic. Imagine a single pipeline that deploys all your code regardless of the tech stack it is written in. It’s not a dream; we’ll show you how.
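
As a preview of where the book is headed, one way a build stage can become stack-agnostic is to parameterize the container image and build command per language. This is a hypothetical sketch of the pattern, not the book’s actual framework; the image names and commands are illustrative assumptions:

    #!/usr/bin/env bash
    # Hypothetical generic build stage: the tech stack is a parameter,
    # so one script can serve every application.
    set -euo pipefail

    LANGUAGE="${1:?usage: build.sh <dotnet|node|java>}"

    # Map each stack to a container image and build command
    # (illustrative choices, not this book's implementation).
    case "$LANGUAGE" in
      dotnet) IMAGE="mcr.microsoft.com/dotnet/sdk:6.0"
              BUILD_CMD="dotnet build" ;;
      node)   IMAGE="node:18"
              BUILD_CMD="npm ci && npm run build" ;;
      java)   IMAGE="maven:3-eclipse-temurin-17"
              BUILD_CMD="mvn -q package" ;;
      *)      echo "unsupported language: $LANGUAGE" >&2
              exit 1 ;;
    esac

    # The same pattern regardless of stack: mount the source and
    # build it inside a container.
    docker run --rm -v "$(pwd)":/workspace -w /workspace "$IMAGE" bash -c "$BUILD_CMD"

Invoked as ./build.sh node, the script builds a Node.js app; pass dotnet or java and only the container changes, not the pipeline.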

Who This Book is For

This book was written with the DevOps professional in mind, someone who may be struggling with writing and maintaining multiple pipelines. However, it’s also for anyone in technology who is interested in learning about building generic pipelines.

You don’t have to be a DevOps master to get a lot from this book; however, you will benefit more with experience in the following areas:

  • Docker: Familiarity with Docker, including creating Dockerfiles and building and running images. We won’t get very deep into Docker, but a working knowledge is key (a short refresher sketch follows this list).
  • Bash Scripts: Most of the examples in this book are written in Bash. You should have at least some minimal experience with shell scripts.
  • CI/CD Platform Experience: You’ll be much better off if you already have experience with a platform like CircleCI or Jenkins. However, throughout the book we’ll walk you through working with these platforms.
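
If you need a refresher on the Docker basics assumed above, the whole loop of writing a Dockerfile, building an image, and running a container fits in a few lines. This toy example is mine, not from the book:

    # Write a trivial Dockerfile, build an image from it, run a container.
    printf 'FROM alpine:3.19\nCMD ["echo", "Hello from a container"]\n' > Dockerfile

    docker build -t hello-pipeline .
    docker run --rm hello-pipeline    # prints: Hello from a container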

As you can see, having some DevOps and pipeline experience will certainly benefit you as you read this book, but it’s not a hard requirement. If you’re a developer of any type, you should have no issues jumping right into the examples in this book. As stated earlier, the future of DevOps and pipelines is everything as code. If you’re comfortable writing code, you’ll be fine.