How to use post-processors in Packer



Packer is an open-source tool for creating machine images. It can build images for multiple platforms from a single source template. A common use case is creating images that can later be used in cloud infrastructure. In this blog, we will use a post-processor.

Packer works with both JSON and HCL, a configuration language developed by HashiCorp. In this blog, we will use a JSON template to create a machine image with the required packages installed. You can read this blog to get basic knowledge of Packer.


In this blog, we will learn how post-processors work. A post-processor runs after the builder has successfully built and provisioned the image. Post-processors are optional; they can be used to upload artifacts, re-package them, and more. Here we will look at the manifest post-processor. It stores, in JSON format, a list of all the artifacts that Packer produces during a build. If your Packer template includes multiple builds, this helps you keep track of which output artifacts (files, AMI IDs, Docker containers, etc.) correspond to each build. In other words, the manifest post-processor is invoked each time a build completes and updates the data in the manifest file. Builds are identified by name and type, and the manifest also includes their build time, artifact ID, and file list.



 "post-processors": [
     {
         "type": "manifest",
         "output": "output.json"
     }
 ]





  • type: defines the post-processor type, which in our case is manifest.
  • output: defines the file name where the manifest output will be stored.
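
Since the blog mentions HCL, here is a sketch of what the same manifest post-processor looks like in Packer's HCL2 template format (the source block name `amazon-ebs.nginx` is hypothetical):

build {
  sources = ["source.amazon-ebs.nginx"]  # hypothetical source defined elsewhere

  post-processor "manifest" {
    output = "output.json"
  }
}

The behavior is identical to the JSON form: after each build completes, Packer appends that build's artifact details to output.json.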

Now, we will try to create an AMI. We will then be able to launch an AWS instance from that machine image.

Steps to use Post-processors:


First, we need to set the environment variables for Packer with the below commands:

export ACCESS_KEY=<access key>
export SECRET_KEY=<secret key>


Create a file named <file_name.json>; it will look like this:

{
    "variables": {
        "access_key": "{{env `ACCESS_KEY`}}",
        "secret_key": "{{env `SECRET_KEY`}}"
    },
    "builders": [
        {
            "type": "amazon-ebs",
            "access_key": "{{user `access_key`}}",
            "secret_key": "{{user `secret_key`}}",
            "region": "us-east-1",
            "ami_name": "blog-ami-nginx",
            "source_ami": "ami-04505e74c0741db8d",
            "instance_type": "t2.micro",
            "ssh_username": "ubuntu"
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": ["sudo apt-get update", "sudo apt-get install nginx -y"]
        }
    ],
    "post-processors": [
        {
            "type": "manifest",
            "output": "output.json"
        }
    ]
}



In this step, we can build the image. Run the below command:

packer build <file_name.json>

While Packer builds the AMI, it launches a temporary AWS instance from the source image we defined in the JSON template.

After building this AMI, we will see a new file with the name we defined in the template above, storing the artifacts:


{
    "builds": [
        {
            "name": "amazon-ebs",
            "builder_type": "amazon-ebs",
            "build_time": 1651124214,
            "files": null,
            "artifact_id": "us-east-1:ami-093ca9f9a90846dbf",
            "packer_run_uuid": "5c98c178-8517-acf6-fa6f-32e3703ff907",
            "custom_data": null
        }
    ],
    "last_run_uuid": "5c98c178-8517-acf6-fa6f-32e3703ff907"
}

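One practical use of this manifest is feeding later automation. As a sketch, the snippet below pulls the region and AMI ID out of the manifest (the artifact_id field has the form <region>:<ami-id>); the sample output.json here just mirrors the manifest shown above:

```shell
# Recreate a sample manifest like the one Packer wrote above
cat > output.json <<'EOF'
{
  "builds": [
    {
      "name": "amazon-ebs",
      "artifact_id": "us-east-1:ami-093ca9f9a90846dbf"
    }
  ],
  "last_run_uuid": "5c98c178-8517-acf6-fa6f-32e3703ff907"
}
EOF

# Use python3's json module instead of fragile grep/sed parsing.
# The last entry in "builds" is the most recent build.
REGION=$(python3 -c "import json; print(json.load(open('output.json'))['builds'][-1]['artifact_id'].split(':')[0])")
AMI_ID=$(python3 -c "import json; print(json.load(open('output.json'))['builds'][-1]['artifact_id'].split(':')[1])")
echo "$REGION $AMI_ID"
```

You could then pass $AMI_ID to, for example, an AWS CLI or Terraform step that launches an instance from the freshly built image.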

I have covered the basics of post-processors. You can follow this documentation to learn more. If you find this blog helpful, do like and share it with your friends.


Written by 

Mohd Muzakkir Saifi is a Software Consultant at Knoldus Software. He loves to take deep dives into cloud technologies and different tools. His hobbies are gymnastics and traveling.
