Applying Cache and Rate Limiting Features in Apigee


Introduction

In today’s world, protecting our APIs from unwanted hits is a must. In the next few minutes, we will learn how to protect our API’s backend database from unwanted hits/calls with the help of the Apigee gateway tool.
For the basics of Apigee, go through https://blog.knoldus.com/getting-started-with-apigee/

We will discuss two policies here: rate limiting (Quota) and the cache mechanism (Response Cache).

Rate Limiting in Apigee

Let’s understand the rate-limiting policy. Suppose we want to limit the number of requests from any particular user within a specific time interval. This is exactly what the rate-limiting (Quota) policy does.


Example: we want to allow only 10 requests/minute, so no user will be able to hit our API more than 10 times within 1 minute.

If a user tries to hit the API more than 10 times in that minute, they will get a quota-violation error message; a sample error response is shown further below. The core of the Quota policy configuration looks like this:

    <Quota name="Quota-1">
        <Interval ref="apiproduct.developer.quota.interval">1</Interval>
        <TimeUnit ref="apiproduct.developer.quota.timeunit">minute</TimeUnit>
        <Allow count="10" countRef="apiproduct.developer.quota.limit"/>
        <Identifier ref="apigee.client_id"/>
    </Quota>

In the above policy, the Interval (here, 1) is the time window for which you want to apply the policy.

TimeUnit defines the unit of time; it can be second, minute, hour, etc. (here it’s minute).

Finally, the Allow count is the number of requests you would like to allow within that interval of time. When a caller exceeds this limit, Apigee returns an error response like the following:

{  
   "fault":{  
      "detail":{  
         "errorcode":"policies.ratelimit.QuotaViolation"
      },
      "faultstring":"Rate limit quota violation. Quota limit  exceeded. Identifier : _default"
   }
}

In case we want to increase the quota limit, we can simply modify the Interval or Allow count values.
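For instance, a relaxed quota of 100 requests per hour could look like the sketch below (the policy name and the hard-coded values here are illustrative, not part of the original example):

```xml
<!-- Illustrative Quota policy: allow at most 100 requests per hour -->
<Quota name="Quota-100-per-hour">
    <!-- Interval + TimeUnit together define the window: 1 hour -->
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
    <!-- Allow count is the ceiling within that window -->
    <Allow count="100"/>
    <!-- Count requests per client application -->
    <Identifier ref="apigee.client_id"/>
</Quota>
```

Only the Allow count and Interval/TimeUnit need to change; the rest of the policy stays as before.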

Cache Policy

How do we cache the output or response for a fixed period, say roughly 15 minutes (901 seconds in the example below)?

The use case of the cache mechanism can be understood as follows: if the same request comes to our API within the specified period, the response is served from the cache alone, which protects the backend database from unnecessary calls.

<ResponseCache name="Response-Cache-1">
    <DisplayName>Response Cache-1</DisplayName>
    <Properties/>
    <CacheKey>
        <Prefix/>
        <KeyFragment ref="request.verb" type="string"/>
        <KeyFragment ref="request.uri" type="string"/>
    </CacheKey>
    <ExpirySettings>
        <ExpiryDate/>
        <TimeOfDay/>
        <TimeoutInSec ref="">901</TimeoutInSec>
    </ExpirySettings>
</ResponseCache>

Here DisplayName is just the name of the policy. CacheKey defines how we want to key the cached responses of API calls. The cache is stored as (key, value) pairs; the key can be just the URI, or the URI combined with some other fields.

In the above example, the key is the URI together with the verb (the HTTP method of the call). The ExpirySettings portion of the code says how long we want to keep the entry in the cache; here we have defined it as 901 seconds.
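The ResponseCache policy also supports conditionally bypassing the cache via its SkipCacheLookup and SkipCachePopulation elements. The sketch below shows the idea; the header name `bypass-cache` is an assumption for illustration, not something from the example above:

```xml
<!-- Sketch: serve fresh data when the caller sends "bypass-cache: true" -->
<ResponseCache name="Response-Cache-1">
    <CacheKey>
        <KeyFragment ref="request.verb" type="string"/>
        <KeyFragment ref="request.uri" type="string"/>
    </CacheKey>
    <!-- Skip reading from the cache for matching requests... -->
    <SkipCacheLookup>request.header.bypass-cache = "true"</SkipCacheLookup>
    <!-- ...and also skip writing the fresh response back into the cache -->
    <SkipCachePopulation>request.header.bypass-cache = "true"</SkipCachePopulation>
    <ExpirySettings>
        <TimeoutInSec>901</TimeoutInSec>
    </ExpirySettings>
</ResponseCache>
```

This is handy for debugging, or for clients that must always see live backend data.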

Conclusion: Let’s Apply Cache and Rate Limiting together

We can choose to apply both mechanisms at once in order to optimize API performance and protect the database together: the cache lookup runs first, and only cache misses are checked against the quota and forwarded to the backend.
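A minimal sketch of wiring both policies into a proxy endpoint (the policy names Response-Cache-1 and Quota-1 and the base path are assumptions): on a cache hit, Apigee skips the remaining request steps and the backend call, so cached responses are not counted against the quota.

```xml
<!-- Sketch: proxy endpoint combining response caching and quota -->
<ProxyEndpoint name="default">
    <PreFlow name="PreFlow">
        <Request>
            <!-- Cache lookup first: a hit short-circuits the flow -->
            <Step><Name>Response-Cache-1</Name></Step>
            <!-- Only cache misses reach the quota check -->
            <Step><Name>Quota-1</Name></Step>
        </Request>
        <Response>
            <!-- Same policy on the response flow populates the cache -->
            <Step><Name>Response-Cache-1</Name></Step>
        </Response>
    </PreFlow>
    <HTTPProxyConnection>
        <BasePath>/demo</BasePath>
    </HTTPProxyConnection>
    <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
    </RouteRule>
</ProxyEndpoint>
```

Note that the ResponseCache policy must be attached to both the request flow (lookup) and the response flow (populate) for the cache to fill up.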

