Playing with Jenkinsfile

Jenkins – Introduction

Jenkins is an open-source automation server used to automate all sorts of tasks related to building, testing, and delivering or deploying software. A CD pipeline refers to the process of taking an application from source control and delivering it to end users.
A few of the features that make Jenkins a first choice are:

  • Easy installation
  • Plugins
  • Easy configuration
  • Extensible
  • Distributed

Concept of Pipelines in Jenkins

Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous integration and continuous delivery pipelines into Jenkins.

Every change to the software under development is first committed to source control. These changes then go through a long and complex process on their way to final release. This process involves building the software in a reliable and repeatable manner and progressing the built software (in Jenkins, called a “build”) through multiple stages of testing and deployment.

A text file, called a Jenkinsfile, contains the definition of a Jenkins Pipeline. One can check this file into the repository along with the code, or create it directly from the UI while setting up the pipeline.

Need for Pipelines:

Pipelines are needed for several reasons:

  • Code: Pipelines are implemented in code and are typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
  • Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
  • Pausable: Pipelines can optionally stop and wait for human input or approval (with the input directive) before continuing the run.
  • Versatile: Pipelines support complex real-world CD requirements, including the ability to fork/join, loop, and perform work in parallel.
  • Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.

Types of pipelines in Jenkins

  1. Declarative Pipeline
  2. Scripted Pipeline

Scripted Pipeline: The scripted pipeline is the traditional way of writing a Jenkins pipeline as code. It uses a strictly Groovy-based syntax, which gives extensive control over the script and lets you manipulate its flow freely. As a result, it helps developers build advanced and complex pipelines as code.

node('TestNode1') {
    stage('Check Output Build') {
        echo 'Hello, I am in a Scripted Pipeline'
    }
    stage('pwd check Build') {
        // pwd(tmp: true) returns the path of a temporary directory on the node
        echo pwd(tmp: true)
    }
    stage('Check user input stage') {
        input 'Enter a value'
    }
}

Flow Control in Scripted Pipelines:

node {
    stage('Flow Control') {
        if (env.BRANCH_NAME == 'master') {
            echo 'I am in the master branch'
        } else {
            echo 'I am elsewhere'
        }
    }
}

Declarative Pipeline: This is a relatively new feature that supports the pipeline-as-code concept. It makes the pipeline code easier to read and write. The code is written in a Jenkinsfile, which can be checked into a source control management system such as Git. A declarative pipeline is defined within a block labeled ‘pipeline’, whereas a scripted pipeline is defined within a ‘node’ block.

pipeline {
    agent any
    stages {
        stage ('Check build works') {
            steps {
                echo 'I am in Declarative Pipeline Script.'
            }
        }
        stage ('trying retry') {
            steps {
                retry(5) {
                    // shell commands must be wrapped in sh; a bare 'mvn clean' is not a Pipeline step
                    sh 'mvn clean'
                }
            }
        }
    }
}

The key differences between the two pipeline types lie in their syntax and flexibility.

Terminologies Used:

  • Pipeline – A Pipeline’s code defines the entire build process, including stages for building an application, testing it, and then delivering it.
  • Stage – A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. “Build”, “Test” and “Deploy” stages). A Pipeline can contain a single stage or as many stages as the task requires.
  • Node – A node is a machine that is part of the Jenkins environment and is capable of executing a Pipeline.
  • Steps – A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time.
  • Agent – This section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment, depending upon the location of the agent section. The section must be defined at the top-level inside the pipeline block, but stage-level usage is optional.
For example,

pipeline {
    agent any
    options {
        retry(3)
    }

    stages {
        stage('Compile - Stage 1') {
            steps {
                echo 'Compilation is carried out here.'
                sh 'mvn clean'
            }
        }

        stage('Test - Stage 1') {
            steps {
                echo 'testing is carried out here.'
                sh 'mvn test'
            }
        }

        stage('Package - Stage 1') {
            steps {
                echo 'packaging is carried out here.'
                sh 'mvn package'
            }
        }
    }
}

List of parameters that can be used with the agent directive (a sketch using stage-level agents follows the list):

  1. any – We can execute the pipeline, or a stage, on any available agent.
  2. none – No global agent will be allocated for the entire pipeline to run and each stage section will need to contain its own agent.
  3. label – The pipeline, or a stage, can be executed on an agent available in the Jenkins environment with the provided label.
  4. docker – We can execute the pipeline, or the stage, within the given container.
  5. dockerfile – A Dockerfile is specified, from which a container is built to run the pipeline.
  6. kubernetes – We can execute the pipeline, or the stage, inside a pod deployed on a Kubernetes cluster.
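
For example, here is a minimal sketch combining some of these parameters: none at the top level forces each stage to declare its own agent. The label ‘linux’ and the image ‘maven:3-alpine’ are placeholders for whatever exists in your environment.

pipeline {
    // No global agent; every stage must declare its own.
    agent none
    stages {
        stage('On a labelled node') {
            agent { label 'linux' } // assumed label
            steps {
                echo 'Running on an agent labelled linux.'
            }
        }
        stage('Inside a container') {
            agent { docker { image 'maven:3-alpine' } } // assumed image
            steps {
                sh 'mvn --version'
            }
        }
    }
}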

Points to remember:

post

The post section defines one or more additional steps that are run upon the completion of a Pipeline’s or stage’s run.

  1. always
    Always run the steps in the post section regardless of the completion status of the pipeline’s or stage’s run.
  2. changed
    Only run the steps in the post block if the current Pipeline’s or stage’s run has a different completion status as compared to its previous run.
  3. fixed
    Only run the steps in the post block if the current Pipeline’s or stage’s run is successful and the previous run was a failure or was unstable.
  4. regression
    Only run the steps in the post block if the current Pipeline’s or stage’s run’s status is failure, unstable, or aborted, and the previous run was successful.
  5. aborted
    Only run the steps in the post block if the current Pipeline’s or stage’s run has an “aborted” status, usually due to the Pipeline being manually aborted.
  6. failure
    Only run the steps in the post block if the current Pipeline’s or stage’s run has a “failed” status.
  7. success
    Only run the steps in the post block if the current Pipeline’s or stage’s run has a “success” status.
  8. unstable
    Only run the steps in the post block if the current Pipeline’s or stage’s run has an unstable status. This is usually caused by test failures, code violations, etc.
  9. unsuccessful
    Only run the steps in the post block if the current Pipeline’s or stage’s run does not have a “success” status.
  10. cleanup
    Only run the steps in the post block after every other post condition has been evaluated, regardless of the Pipeline’s or stage’s status.

For example,

pipeline {
    agent any  
    stages {
        stage('Build - Stage 1') {
            steps {
                echo 'Build is carried out here.'
            }
        }
    }
    post {
        always {
            echo 'I will be executed always, P.S. I am in declarative pipeline\'s always block.'
        }

        changed {
            echo 'I am executed only when there is a change w.r.t the previous build.'
        }

        failure {
            echo 'I am executed only when there is a failure.'
        }

        success {
            echo 'I am executed only when there is a success.'
        }

        fixed {
            echo 'I will be executed only when there is success but an unstable/failed previous build.'
        }

        regression {
            echo 'I am executed only when there is a failure/aborted build now but a stable previous build.'
        }

        aborted {
            echo 'I am executed only when the build is aborted/ stopped manually.'
        }

        cleanup {
            echo 'I will be always executed after any of the above post conditions are completed.'
        }

        unstable {
            echo 'I am executed only when there is an unstable build, usually caused by test failures or code violations.'
        }

        unsuccessful {
            echo 'I am executed only when there is not a success status.'
        }

    }
}

options

Next is the ‘options’ directive. It allows configuring Pipeline-specific options from within the Pipeline itself. Several options are available within the ‘options’ directive; a combined sketch follows the list.

  1. buildDiscarder
    Persist artifacts and console output for the specified number of recent Pipeline runs.
  2. checkoutToSubdirectory
    Perform the automatic source control checkout in a sub-directory of the workspace.
  3. quietPeriod
    Set the quiet period, in seconds, for the Pipeline, overriding the global default.
  4. retry
    On failure, retry the entire Pipeline the specified number of times.
  5. disableConcurrentBuilds
    Disallow concurrent executions of the Pipeline, thus preventing simultaneous access to shared resources, etc.
  6. skipStagesAfterUnstable
    Skip stages once the build status has gone to unstable.
  7. disableResume
    Do not allow the pipeline to resume if the master restarts.
  8. preserveStashes
    Preserve stashes from completed builds, for use with stage restarting.
  9. skipDefaultCheckout
    Skip checking out code from source control by default in the agent directive.
  10. timeout
    Set a timeout period for the Pipeline run, after which Jenkins should abort the Pipeline.
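
For example, a sketch combining a few of the options listed above (the numbers are illustrative, not recommendations):

pipeline {
    agent any
    options {
        // Keep artifacts and console output for the last 5 runs only.
        buildDiscarder(logRotator(numToKeepStr: '5'))
        // Abort the run if it exceeds one hour.
        timeout(time: 1, unit: 'HOURS')
        // Never run two builds of this Pipeline at the same time.
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                echo 'The options above apply to the whole Pipeline.'
            }
        }
    }
}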

input

The input directive on a stage allows you to prompt for user input, using the input step. The stage will pause after any options have been applied. If the input is approved, the stage continues, and any parameters provided as part of the input submission are available in the environment for the rest of the stage. If it is rejected, the build is aborted.

For example,

pipeline {
    agent any
    stages {
        stage('Example') {
            input {
                message "Are you in ?"
                ok "Yes"
                submitter "Aakash"
                parameters {
                    string(name: 'ANSWER', defaultValue: 'I\'m in', description: 'Welcome')
                }
            }
            steps {
                echo "Hey, I saw you answered ${ANSWER} ..."
            }
        }
    }
}

environment

The environment directive specifies a sequence of key-value pairs that are defined as environment variables, either for all steps or only for the steps of a specific stage, depending on where the directive is placed inside the Pipeline.

In addition to the key-value pairs, the environment directive contains a special function called “credentials()”. This function can be used to access pre-defined credentials in Jenkins. Types of credentials supported in Jenkins are:

  1. secret text
  2. username-password
  3. secret file
  4. ssh with private key

For example,

pipeline {
    agent any
    options {
        quietPeriod(10)
    }
    stages {
        stage('Access credentials') {
            environment {
                CREDS = credentials('UsernameWithPassword')
            }
            steps {
                sh 'echo "credentials username password : $CREDS"'
                sh 'echo "username : $CREDS_USR"'
                sh 'echo "password : $CREDS_PSW"'
            }
        }
    }
}
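
The other credential types bind differently: with secret text, the variable holds the secret string itself, and with a secret file, it holds the path to a temporary copy of the file. A sketch, assuming credential IDs ‘MySecretText’ and ‘MySecretFile’ exist in Jenkins:

pipeline {
    agent any
    stages {
        stage('Other credential kinds') {
            environment {
                API_TOKEN = credentials('MySecretText') // assumed secret-text credential ID
                CONF_FILE = credentials('MySecretFile') // assumed secret-file credential ID
            }
            steps {
                // The token value is masked in the console output.
                sh 'echo "token : $API_TOKEN"'
                // For a secret file, the variable points to a temp file on the agent.
                sh 'ls -l "$CONF_FILE"'
            }
        }
    }
}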

parameters

The parameters directive provides a list of parameters that a user should provide when triggering the Pipeline. Consider the following example,

pipeline {
    agent any 

    parameters {
        string ( name: 'Emp_Name', defaultValue: 'Mr. Sinha', description: 'Name of the employee')
        text ( name: 'Emp_Description', defaultValue: 'Hard-working employee', description: 'Nature of the employee')
        booleanParam ( name: 'Toggle', defaultValue: true, description: 'Toggle this value')
        choice ( name: 'Emp_Choice', choices: ['Shift 1','Shift 2','Shift 3'], description: 'Choose a shift')
        password ( name: 'PASSWORD', defaultValue: 'SECRET', description: 'Enter the password to employee portal')
    }

    stages {
        stage('Demo of parameters') {
            steps {
                echo "Hello ${params.Emp_Name}"
                echo "Desc : ${params.Emp_Description}"
                echo "Toggle : ${params.Toggle}"
                echo "Your Choice : ${params.Emp_Choice}"
                echo "PASSWORD : ${params.PASSWORD}"
            }
        }
    }
}

triggers

The triggers directive defines the automated ways in which the Pipeline should be re-triggered. In the following cron example, ‘H */4 * * *’ runs the Pipeline roughly every four hours; the H lets Jenkins spread the exact start minute using a hash of the job name, avoiding load spikes.

pipeline {
    agent any
    triggers {
        cron('H */4 * * *')
    }
    stages {
        stage('Stage -1') {
            steps {
                echo 'How you doin !'
            }
        }
    }
}
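
Besides cron, the triggers directive also supports pollSCM, which uses the same cron syntax but runs the Pipeline only when polling detects new changes in source control. A minimal sketch:

pipeline {
    agent any
    triggers {
        // Poll the SCM every 15 minutes; rebuild only if there are new commits.
        pollSCM('H/15 * * * *')
    }
    stages {
        stage('Stage - 1') {
            steps {
                echo 'Triggered by an SCM change.'
            }
        }
    }
}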

tools

They can be declared inside a pipeline block or a stage block, and they define the tools to auto-install and put on the PATH. The supported tools are Maven, Gradle, and JDK.
For example (Maven),

pipeline {
    agent any
    tools {
        // 'apache-maven-3.0.6' must match a Maven installation name configured in Jenkins' Global Tool Configuration
        maven 'apache-maven-3.0.6'
    }
    stages {
        stage('Stage - 1') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}

parallel

We can also run stages in parallel; the examples above are all demos of sequential stages.
Stages in a Declarative Pipeline can have a parallel section containing a list of nested stages to be run in parallel.

For example,

pipeline {
    agent any
    options {
        quietPeriod(10)
    }
    stages {
        stage('Running stages in parallel.') {
            parallel {
                stage('stage-1') {
                    steps {
                        echo 'Stage 1 completes.'
                    }
                }

                stage('stage-2') {
                    steps {
                        echo 'Stage 2 completes.'
                    }
                }

                stage('stage-3') {
                    steps {
                        echo 'Stage 3 completes.'
                    }
                }
            }
        }
    }
}

Thanks for keeping up…
