Triggers in Apache Airflow

Reading Time: 3 minutes

Trigger rules are one of the easier concepts to understand in Airflow. Let's walk through the trigger rules Apache Airflow provides.

Why Trigger Rules?

By default, Airflow waits for all of a task's parent/upstream tasks to complete successfully before it runs that task. However, this is just the default behavior, and you can control it using the trigger_rule argument of a task.

Basically, a trigger_rule defines the condition under which a task gets triggered. Although the conventional workflow behavior is to trigger tasks only when all of their directly upstream tasks have succeeded, Airflow allows for more complex dependency settings.

All operators have a trigger_rule argument that defines the rule by which the generated task gets triggered.

  • The options for trigger_rule are:
    • all_success
    • all_failed
    • all_done
    • one_failed
    • one_success
    • none_failed
    • and many more…

1. all_success

  • This is the default trigger rule
  • Triggered when all parent/upstream tasks have succeeded
  • Syntax: trigger_rule='all_success'

  • Example
    • Here, task_2 and task_3 get triggered after the successful completion of task_1
  • When one of the parent tasks is skipped, the downstream task is automatically skipped as well.
  • Example
    • Here, task_4 gets skipped after task_3 is skipped
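The firing condition can be sketched in plain Python (a simplified model of the scheduler's check, not actual Airflow code):

```python
def fires_all_success(upstream_states):
    # trigger_rule='all_success' (the default): fire only when
    # every upstream task finished in the 'success' state.
    return all(state == "success" for state in upstream_states)

print(fires_all_success(["success", "success"]))  # True: the task runs
print(fires_all_success(["success", "skipped"]))  # False: a skipped parent blocks it
```

Note that in real Airflow, a skipped parent under all_success marks the downstream task as skipped rather than simply blocking it.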

2. all_failed

  • Triggered when all parent/upstream tasks are in a failed or upstream_failed state
  • Useful when you want to do some cleanup, or something more complex than a simple failure callback.
  • Syntax: trigger_rule='all_failed'

  • Example
    • Here, task_4 gets triggered after the failure of task_2 and task_3
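As a rough plain-Python sketch of this check (a simplification, not the actual scheduler code):

```python
def fires_all_failed(upstream_states):
    # trigger_rule='all_failed': fire only when every upstream task
    # ended in 'failed' or 'upstream_failed'.
    return all(state in ("failed", "upstream_failed") for state in upstream_states)

print(fires_all_failed(["failed", "failed"]))   # True: the cleanup task runs
print(fires_all_failed(["failed", "success"]))  # False: one parent succeeded
```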

3. all_done

  • Triggered when all parent/upstream tasks have finished executing
  • It does not depend on their final state (failure, skip, or success)
  • Used for a task that you always want to execute
  • Syntax: trigger_rule='all_done'

  • Example
    • Here, task_4 gets triggered after the previous tasks (task_2 and task_3) have executed, regardless of their state
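In plain Python, this condition only checks that every parent has reached a terminal state (a simplified sketch, not Airflow internals):

```python
# Terminal states a task can finish in (simplified set).
TERMINAL = {"success", "failed", "skipped", "upstream_failed"}

def fires_all_done(upstream_states):
    # trigger_rule='all_done': fire once every upstream task has
    # finished, whatever its final state was.
    return all(state in TERMINAL for state in upstream_states)

print(fires_all_done(["success", "failed"]))   # True
print(fires_all_done(["success", "running"]))  # False: still waiting on a parent
```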

4. one_failed

  • Triggered as soon as at least one parent/upstream task has failed
  • It does not wait for all parents to finish executing
  • Used with long-running tasks, when you want to execute another task as soon as one of them fails
  • Syntax: trigger_rule='one_failed'

  • Example
    • Here, task_4 gets triggered after the failure of task_2, regardless of the state of task_3
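Sketched in plain Python, the condition fires on the first failure, even while other parents are still running (a simplification of the real behavior):

```python
def fires_one_failed(upstream_states):
    # trigger_rule='one_failed': fire as soon as any upstream task
    # has failed; the remaining parents may still be running.
    return any(state == "failed" for state in upstream_states)

print(fires_one_failed(["failed", "running"]))   # True: fires without waiting
print(fires_one_failed(["success", "success"]))  # False
```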

5. one_success

  • Triggered as soon as at least one parent/upstream task succeeds
  • It does not wait for all parents to finish executing
  • It is the opposite of the one_failed trigger rule
  • Syntax: trigger_rule='one_success'
  • Example
    • Here, task_4 gets triggered after the successful completion of task_2, regardless of the state of task_3
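The mirror image of the previous sketch: the condition fires on the first success (again a plain-Python simplification, not Airflow code):

```python
def fires_one_success(upstream_states):
    # trigger_rule='one_success': fire as soon as any upstream task
    # has succeeded, without waiting for the rest.
    return any(state == "success" for state in upstream_states)

print(fires_one_success(["success", "running"]))  # True: fires without waiting
print(fires_one_success(["failed", "failed"]))    # False
```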

6. none_failed

  • Triggered if no parent has failed (i.e. all parents succeeded or were skipped)
  • Used to handle the skipped status.
  • Syntax: trigger_rule='none_failed'
  • Example
    • Here, task_4 gets triggered because none of the parent tasks failed (i.e. task_2 succeeded and task_3 was skipped).
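In plain Python, this condition waits for all parents to finish and then checks that none failed (a simplified sketch of the rule):

```python
def fires_none_failed(upstream_states):
    # trigger_rule='none_failed': fire once all upstream tasks are done
    # and none of them failed ('success' and 'skipped' are both fine).
    done = {"success", "failed", "skipped", "upstream_failed"}
    all_done = all(state in done for state in upstream_states)
    no_failures = not any(state in ("failed", "upstream_failed") for state in upstream_states)
    return all_done and no_failures

print(fires_none_failed(["success", "skipped"]))  # True: a skipped parent is OK
print(fires_none_failed(["success", "failed"]))   # False
```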

These are some of the basic trigger rules. For the full list, read the official documentation.

Code

The attached screenshot shows the complete example.

I hope you are now able to understand trigger rules in Apache Airflow. Stay tuned.

Read Apache-Airflow documentation for more knowledge.

To gain more information visit Knoldus Blogs.

Written by Kuldeep

Kuldeep is a Software Consultant at Knoldus Software LLP. He has sound knowledge of various programming languages like C, C++, and Java, databases like MySQL, and frameworks like Apache Kafka and Spring/Spring Boot. He is passionate about daily and continuous improvement.
