An in-depth comparison of two CI/CD servers: Concourse and Jenkins.

Concourse vs Jenkins

In this blog we compare Jenkins and Concourse. We have divided our requirements into two categories: development and operations. This post focuses on the development side of the CI server.

Development focused requirements

  1. Is open source.
    • This way we can add custom plugins, fix bugs and understand the vision of the technology.
  2. Supports pipelines as code.
  3. Needs to support all major operating systems.
    • Windows, Linux (at least Debian- and Red Hat-based distros).
  4. Can be accessed locally as well as server-side.
  5. Can run defined steps in parallel.
    • Building the same source on ten OS’s should happen in parallel.
  6. Can store artifacts from one build step to another.
  7. Can re-trigger a pipeline step, to ease troubleshooting.
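Requirement 6 (passing artifacts between build steps) maps naturally onto Concourse's task inputs and outputs. A minimal sketch, with hypothetical job, task, and directory names:

```yaml
# Sketch: the "build" task declares an output directory, which the
# following "test" task consumes as an input. All names are illustrative.
jobs:
- name: build-and-test
  plan:
  - task: build
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      outputs:
      - name: built-artifact      # directory handed on to later steps
      run:
        path: sh
        args: ["-c", "echo binary > built-artifact/app"]
  - task: test
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      inputs:
      - name: built-artifact      # the same directory, now as an input
      run:
        path: cat
        args: ["built-artifact/app"]
```

On the Jenkins side, the rough equivalents are stash/unstash within a pipeline run and archiveArtifacts across runs.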

A close look at Concourse and Jenkins pipelines

Our Jenkins focus lies only on the “new” pipeline jobs, as this is where Jenkins currently focuses its development effort.

So, every time we say “Jenkins” we mean “Jenkins pipeline”.

Concourse

Concourse is a 100% open-source CI/CD system with approximately 100 integrations (“resource types”) to the outside world. Concourse’s principles reduce the risk of switching to and from Concourse by encouraging practices that decouple your project from your CI’s implementation details, and by keeping all configuration in declarative files that can be checked into version control.

Jenkins

Jenkins is an open-source, cross-platform Continuous Integration tool written in Java. Kohsuke Kawaguchi created the project in 2004 under the name Hudson; in 2011 it was renamed Jenkins following a dispute with Oracle. The tool simplifies the integration of changes into a project and the delivery of fresh builds to users.

Hello world

Concourse

Concourse YAML is normally split into multiple files, but for easier reading it has all been inlined below:

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      run:
        path: echo
        args: ["Hello, world!"]

This will simply create a single job, with no resources and no inputs, that echoes “Hello, world!”. Triggering it can be done through the web interface, or by using the command-line interface called fly: fly -t yourconcourse execute --config tests.yml
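Note that fly execute takes a standalone task configuration rather than a whole pipeline, so the file passed with --config contains just what would sit under a task's config key in the pipeline. A sketch of such a file (the target name yourconcourse is a placeholder):

```yaml
# tests.yml - standalone task config for one-off execution with:
#   fly -t yourconcourse execute --config tests.yml
platform: linux
image_resource:
  type: docker-image
  source: {repository: ubuntu}
run:
  path: echo
  args: ["Hello, world!"]
```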

Jenkins

Jenkins has two flavors of pipeline DSL: declarative and scripted.

For both of them, you need to set up a pipeline job, either in the Jenkins UI or through its API. Jenkins supports keeping the pipeline definition inside the VCS (as a Jenkinsfile) or written directly in the job definition. When that is done, your DSL file is pretty minimal, making hello world almost a one-liner.

// Declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'hello world'
            }
        }
    }
}

// Scripted pipeline
node {
    stage('Build') {
        echo 'hello world'
    }
}

Conclusion

Winner: Both

Getting started is straightforward with both systems. If we could not get them up and running, we would not evaluate them.

Operating systems

Concourse

The way you create a Concourse worker or web node is by downloading the Concourse binary and starting it with either the “worker” or “web” argument. The process is the same whether you’re running Windows, Linux, or macOS, and this is possible because Concourse is written in Go.

However, all the native resources, as well as most of the community resources, run in containers, so there are additional implications to consider. This means that a typical setup with Windows will still have at least one Linux worker to run resources. Furthermore, while Windows container support is maturing, Concourse only lists it as a future feature, so a Windows worker typically runs inside a virtual machine and separates builds by folder structure rather than in containers.

Jenkins

Jenkins nodes run on either bare-metal servers or VMs. As long as there is a JVM for that OS, it runs. How you describe your node environment is up to a third-party tool such as Ansible, Chef, or Puppet, or to you manually installing the server. You add nodes to your farm either through the master's UI, or by using an agent launcher that makes the nodes contact their master.

Conclusion

Winner: Inconclusive

While there are clear benefits to using containers, it also adds a certain amount of complexity. The JVM is simple to set up and runs everywhere, but is more limited.

Developer initiated work

Concourse

Concourse allows developers to execute tasks on the server from their own terminal by running:

fly -t myserver execute --config myfirstjob.yml

It will then use whatever local input is given and run the task! This is really helpful because it allows developers to debug their code without having to go through the pipeline.

The only requirement is a Concourse server which developers can target, which then runs the task in a contained space on a given worker, matching the specifications.
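Local inputs are mapped with fly's --input flag, which pairs a declared task input with a local directory. A hedged sketch, where the file name and the input name repo are illustrative:

```yaml
# myfirstjob.yml - a task that declares an input; run it against the
# local working copy with:
#   fly -t myserver execute --config myfirstjob.yml --input repo=.
platform: linux
image_resource:
  type: docker-image
  source: {repository: ubuntu}
inputs:
- name: repo            # filled from the local directory via --input
run:
  path: ls
  args: ["repo"]
```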

Jenkins

In this area Jenkins is not even present. We need a server-defined job in order to run something on the build servers. Period.

So you need to make the git round-trip in order to test something:

Git push --> Jenkins pull --> Jenkins build --> Jenkins response --> Repeat

But, with a multibranch pipeline, you can change the pipeline according to your needs and push it to a non-master branch to see the execution.

Conclusion

Winner: Concourse

Concourse is showing the way when it comes to developer initiated pipeline execution. So Jenkins, you need to step up the game here!

Parallelizing your pipeline

Concourse

Support exists for individual jobs, resources, and entire pipelines to run in parallel. When a resource has a change, it can trigger all the jobs that depend on it, and if the resource changes again a moment later, it will run the job in parallel with the new input.

- name: afterburner
  plan:
  - aggregate:   # deprecated in newer Concourse versions in favor of in_parallel
    - get: praqma-tap
    - get: git-phlow # contains the formula update script
    - get: gp-version
      passed: [takeoff]
    - get: phlow-artifact-darwin-s3
      passed: [takeoff]
      trigger: true
  - task: brew-release
    file: git-phlow/ci/brew/brew.yml
    on_failure:
      put: slack-alert
      params:
        text: |
          brew release failed https://concourse.bosh.praqma.cloud/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
  - put: praqma-tap
    params:
      repository: updated-praqma-tap

Jenkins

Jenkins supports this natively in both DSL flavors. It creates a lightweight job on the master to coordinate the builds. This means that all parallel executions in one stage need to finish before the next stage can run.

Below are examples of how a parallel execution list can look in both scripted and declarative.

Jenkinsfile (Scripted Pipeline)
stage('Test') {
    parallel linux: {
        node('linux') {
            try {
                sh 'run-tests.sh'
            } finally {
                junit '**/target/*.xml'
            }
        }
    },
    windows: {
        node('windows') {
            try {
                bat 'run-tests.bat'
            } finally {
                junit '**/target/*.xml'
            }
        }
    }
}
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent none
    stages {
        stage('Test') {
            parallel {
                stage('Windows') {
                    agent { label 'windows' }
                    steps {
                        bat 'run-tests.bat'
                    }
                    post {
                        always {
                            junit '**/TEST-*.xml'
                        }
                    }
                }
                stage('Linux') {
                    agent { label 'linux' }
                    steps {
                        sh 'run-tests.sh'
                    }
                    post {
                        always {
                            junit '**/TEST-*.xml'
                        }
                    }
                }
            }
        }
    }
}

Conclusion

Winner: Concourse

While not by a big margin, Concourse takes this one.

When running tasks in parallel in Jenkins, you need them all done before branching out again. Concourse has a much looser definition and can therefore depend on arbitrary conditions.

Retriggering pipelines

Concourse

This is as simple as either going to the web client and clicking the ‘+’ on a job, or triggering it again from the fly command-line interface (fly -t yourconcourse trigger-job -j pipeline/job). It will then attempt to run the same process as before, with the same inputs (or new ones, if they have updated).

Jenkins

You can retrigger a whole pipeline at any time, with the same parameters.

Triggering a single stage inside a pipeline is a Jenkins Enterprise-only feature. CloudBees is working on such a feature for declarative pipeline, but not for the more advanced scripted one. To me this is a very disappointing move.

Conclusion

Winner: Concourse

As Concourse treats each job in a pipeline as an atomic action, retriggering is a no-brainer. Jenkins needs to (re)implement this feature to be on par with Concourse.

Final Conclusion

And the winner is…..

Well, to be frank, there is no such conclusion, because it all depends on what you value most in your setup. Jenkins is the de facto standard with our customers, but Concourse has merits that make it a worthy competitor.

