Evolution of CI/CD pipeline

Back in the day there weren't any tools for continuous integration or delivery (like Jenkins or Ansible). There was only ad-hoc building, manual testing and deployment done by developers from their own computers – and that was surprisingly OK (or perhaps they just didn't know the power of CI/CD tools yet). Actually it's fine even today if we look at the CI/CD pipeline from a Lean perspective: automate only the parts of the value stream that are necessary to get things done, BUT remember to measure and improve the process when needed (by creating a feedback loop and continuously evaluating the process in retrospectives).

Sometimes it is better to build your own tools instead of forcing yourself to use ready-made ones. I remember the times before Ansible when I scripted a somewhat rough deployment pipeline in PowerShell to deploy PHP and Java artefacts to a Windows environment, alongside .NET stuff packed into NuGet packages via TeamCity and Octopus Deploy. I know it was a pretty questionable solution and I'm sure I won't do it again, but at the time it felt like a good enough compromise, and I learned what it takes to build things more or less from scratch. I found that a complex CI/CD process can be an indicator of massive technical debt or of cultural and process problems – but anyway, it worked, and that was the most important thing. The customer got the solution and our team delivered working software to production many times a day.

Nowadays CI and CD pipelines can be expressed as code and stored in the VCS instead of managing configurations through the tool's own GUI. Treating configuration as code is good because e.g. Ansible/Docker/Jenkins configurations are as valuable as the main business logic, so it is only fair to manage them through the same processes (pull requests, show-and-tell sessions, testing and so on). That approach keeps the focus on the actual value and moves the team closer to a DevOps culture.
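For example, a declarative Jenkinsfile committed to the application's repository puts the pipeline definition through the same review and testing process as the business logic. The sketch below is only illustrative – the stage names, Gradle tasks and the Ansible playbook are assumptions, not from any particular project:

```groovy
// Jenkinsfile stored in the project repository, versioned and reviewed like any other code
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew clean build'   // build command is project-specific
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
        stage('Deploy') {
            when { branch 'master' }         // deploy only from the main branch
            steps {
                // hypothetical playbook and inventory names
                sh 'ansible-playbook -i inventories/production deploy.yml'
            }
        }
    }
}
```

A change to this file goes through a pull request just like a change to the application code, so the pipeline itself gets reviewed, discussed and rolled back with the normal tooling.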

Another good thing about modern CI is that parallel processing is easier to set up. Today it's possible to spin up a flexible number of Jenkins slaves in Docker containers when needed, or to execute unit, integration and system tests in parallel with tools like Gradle, Selenium Grid 2 or Robot Framework with pabot (for parallel end-to-end testing). Of course, cloud-based CI services like Travis or CircleCI are pretty handy when the project is suitable for them.
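As a rough sketch of what that can look like in a Jenkins declarative pipeline, the fragment below fans the test suites out to parallel stages; the agent labels, Gradle task and pabot invocation are assumptions for illustration:

```groovy
// Sketch: run unit tests and Robot Framework end-to-end tests in parallel stages
pipeline {
    agent none
    stages {
        stage('Tests') {
            parallel {
                stage('Unit tests') {
                    agent { label 'docker' }   // assumes Docker-based build agents with this label
                    steps {
                        sh './gradlew test'
                    }
                }
                stage('End-to-end tests') {
                    agent { label 'docker' }
                    steps {
                        sh 'pabot --processes 4 tests/'   // pabot splits Robot suites across processes
                    }
                }
            }
        }
    }
}
```

Because each branch can run on its own container-based agent, the total wall-clock time is closer to the slowest suite rather than the sum of all of them.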

But for me the basic idea is still to see the CI/CD pipeline as part of the value stream. It's all about choosing the right tools for the case at hand, or crafting your own tools if no suitable one is available.
