Thursday, 18 June 2015

Using Docker as a Jenkins slave orchestrator

After several years working with Jenkins and enjoying its plugins, I am well aware that there are more modern CI tools out there that are prettier and more pipeline oriented (see Travis CI, GoCD, and a quick search will turn up more), but Jenkins gives me things that matter: in-house control, thousands of plugins and programmatic customization.

After several months working on the goal of consolidating the roughly 100 standalone Jenkins servers in my organization into a single platform, we finally came up with a Jenkins architecture based on Docker slaves that gives us a solid offering for our teams. Let me explain a little about what we needed and how we finally implemented it.

Requirements

  • Offer a common CI platform with LDAP integration
  • Each group has unique requirements
  • No one should be able to modify other groups' work
  • Work parallelization has to be on demand and scalable
  • Each group has to be able to get started on its own, without human interaction (that is, without asking an administrator to set up the infrastructure for them)

First approaches

Virtual Machine slaves

Since all our teams already knew how to work with Jenkins and had active projects, this was an important reason not to be disruptive by switching to another platform. So the first approach was to deploy Jenkins on a virtual server with Puppet and create different kinds of slaves (Linux, Windows, macOS) on virtual machines, also configured with Puppet. This wasn't bad, but as you probably know, starting VMs on demand is slow, although the vSphere plugin makes it easy to do.

Plugins needed

  • vSphere plugin

Problems

  • Potential collisions: giving each project control, via Puppet, over what gets installed on the slaves led to collisions between them.
  • Security risks: slaves shared between projects raised security issues, and a malicious user could delete all user content, including other projects' data.

Predefined dockers on independent Virtual Machines

After hearing about Docker, I didn't hesitate to give it a try: I started learning it and set up a private registry in our official binary repository application, Artifactory. At first it seemed difficult to use for Jenkins, since nobody in the company had any Docker knowledge yet, while almost everyone knew Puppet. But once we hit the problems explained above, it started to look like the solution we wanted, perhaps still using Puppet to configure the containers. We initially used CoreOS images deployed into VMware, but we ran into issues such as Docker not starting again after an automatic update, and problems with some npm packages on AUFS. So we finally decided to use Project Atomic images.

Plugins needed

  • Docker plugin

Problems

  • The Docker plugin is not as functional as desired (we tested 0.9.3): preconfigured slaves work fine, but it fails when you try to automate builds or add templates programmatically.
  • Puppet is not a good way to configure containers: it is possible, but difficult when using a master, and complete over-engineering for our needs.
  • Configuring containers for each Docker daemon is not easy because of the plugin problems, and scaling is not very smooth.

So we were on the right track, but we needed to solve some problems and improve the architecture, until we discovered a couple of jewels.

Final (or not, we are DevOps) solution

After seeing how much good Docker could do for us, we set about improving the Jenkins experience, partitioning the portal per project team and trying to simplify their onboarding. So we used these plugins (a short sketch of how they fit together follows the list):

  • Authorization plugins: LDAP, Matrix-based security
  • Folder plugin: the most effective way to isolate jobs between groups
  • Groovy plugin: when a plugin is not good enough, the best way to modify Jenkins itself
  • Job DSL plugin: the best possible way to create predefined job structures, wizard style
  • Plugins to access our other services (Jira, Artifactory, Sonar, etc.)
  • Some other useful stuff like builders, archiving, etc.
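
To give an idea of how the Folder plugin, Matrix-based security and the Job DSL plugin combine, here is a minimal Job DSL sketch of the kind of thing our folder wizard generates. The folder and LDAP group names are hypothetical, and it assumes the matrix-auth folder-level property is available:

    // Minimal Job DSL sketch (hypothetical names): create an isolated folder
    // for one team and grant permissions only to that team's LDAP group.
    folder('team-alpha') {
        displayName('Team Alpha')
        description('Private space for Team Alpha, generated by the wizard')
        authorization {
            // The team group gets control inside its own folder only;
            // nobody gets global job permissions.
            permission('hudson.model.Item.Read', 'team-alpha-developers')
            permission('hudson.model.Item.Create', 'team-alpha-developers')
            permission('hudson.model.Item.Configure', 'team-alpha-developers')
            permission('hudson.model.Item.Build', 'team-alpha-developers')
            permission('hudson.model.Item.Delete', 'team-alpha-developers')
        }
    }

Run from a seed job, something like this gives each team its own space without an administrator touching the global configuration.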
With them we can offer:

  • A wizard for each team to create its own project folders, with exclusive permissions on them, since nobody has global permissions.
  • A wizard to create a structure per component (git repo), which includes
    • a job to build the component's own container
    • a preconfigured pipeline with a job for each phase, each one triggering the next (sketched below)
  • Some Groovy scripting to work around the docker-plugin problems (also sketched below)
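
The structure generated by the per-component wizard looks roughly like the following Job DSL sketch. All repository, registry and job names here are hypothetical, and the real phases depend on each team:

    // Hypothetical sketch of the per-component structure: a job that builds
    // the component's container image, then phase jobs chained so that each
    // one triggers the next on success.
    def component = 'team-alpha/my-service'   // hypothetical folder/component name

    job("${component}-build-container") {
        scm {
            git('ssh://git@scm.example.com/team-alpha/my-service.git')
        }
        steps {
            // Build and push the component's image to the private registry.
            shell('docker build -t registry.example.com/team-alpha/my-service:latest .')
            shell('docker push registry.example.com/team-alpha/my-service:latest')
        }
        publishers {
            downstream("${component}-test", 'SUCCESS')
        }
    }

    job("${component}-test") {
        steps {
            shell('make test')
        }
        publishers {
            downstream("${component}-deploy", 'SUCCESS')
        }
    }

    job("${component}-deploy") {
        steps {
            shell('make deploy')
        }
    }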

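And as an example of the Groovy fixes, this is the kind of system script we run from the script console (or a scheduled system Groovy job) to clean up slaves that the docker-plugin leaves behind offline. The 'docker-' name prefix is an assumption; adapt it to your own slave template names:

    // Minimal sketch: remove offline nodes whose names suggest they were
    // provisioned by the docker-plugin (hypothetical 'docker-' prefix),
    // so that fresh containers can be provisioned in their place.
    import jenkins.model.Jenkins

    def jenkins = Jenkins.instance
    jenkins.nodes.findAll { node ->
        node.nodeName.startsWith('docker-') && node.toComputer()?.offline
    }.each { node ->
        println "Removing stuck slave: ${node.nodeName}"
        jenkins.removeNode(node)
    }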

And so on... I'm still finishing this article.




