Purging data is key to ensuring your database servers always have enough free space (ideally) to store the ever-growing stream of incoming business data. An operational/transactional database needs to be cleaned of old data that is no longer required by your business rules.
Cassandra, one of the most popular databases on the planet, offers a really simple way to support your data-purge needs: an expiration time for each record that goes in. The technique is known as TTL (time to live).
Once the expiration time is reached, that is, the time to live has elapsed, the record is automatically marked for deletion; placing that marker is known as putting a tombstone on the record.
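To make the tombstone idea concrete, here is a minimal sketch in plain Python (the function and field names are invented for illustration; this is not how Cassandra is implemented, where expired data is only physically dropped later, during compaction):

```python
import time

def check_tombstone(record, now=None):
    """Mark the record with a tombstone once its TTL has elapsed.

    An expired record is not removed immediately: it is first flagged
    for deletion, and the actual data is purged later.
    """
    now = now if now is not None else time.time()
    if record.get("expires_at") is not None and now >= record["expires_at"]:
        record["tombstone"] = True
    return record

# This record's expiration time is already in the past...
record = {"emp_id": 200, "emp_name": "Franc", "expires_at": time.time() - 1}
check_tombstone(record)
print(record["tombstone"])  # ...so it gets tombstoned: True
```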
There are a set of CQLSH commands to utilize this feature:
- Use the INSERT command to set employee details in the mytable table to expire in 86400 seconds, or one day.
cqlsh> INSERT INTO mykeyspace.mytable (emp_id, emp_name) VALUES (200, 'Franc') USING TTL 86400;
- Extend the expiration period to three days by using the UPDATE command, and change the employee name.
cqlsh> UPDATE mykeyspace.mytable USING TTL 259200 SET emp_name = 'Frank' WHERE emp_id = 200;
- Delete a column’s existing TTL by setting its value to zero.
cqlsh> UPDATE mykeyspace.mytable USING TTL 0 SET emp_name = 'somefancyname' WHERE emp_id = 200;
- To check how much time is left before a record gets deleted, use the TTL() function:
cqlsh> SELECT emp_id, TTL(emp_name) FROM mykeyspace.mytable WHERE emp_id = 200;
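The semantics of the four commands above can be mimicked with a toy in-memory store. This is only a sketch (the TTLStore class and its method names are invented here; real Cassandra TTLs are per-cell and enforced server-side):

```python
import time

class TTLStore:
    """Toy in-memory table with per-row TTL semantics (illustrative only)."""

    def __init__(self):
        self.rows = {}  # emp_id -> {"emp_name": ..., "expires_at": ...}

    def insert(self, emp_id, emp_name, using_ttl=None):
        # INSERT ... USING TTL <seconds>; no TTL means "never expires".
        expires = time.time() + using_ttl if using_ttl else None
        self.rows[emp_id] = {"emp_name": emp_name, "expires_at": expires}

    def update(self, emp_id, emp_name, using_ttl=None):
        # UPDATE ... USING TTL <seconds>; a TTL of 0 removes the expiry.
        self.insert(emp_id, emp_name, using_ttl)

    def ttl(self, emp_id):
        # SELECT TTL(emp_name): seconds left, or None when no TTL is set.
        expires = self.rows[emp_id]["expires_at"]
        return None if expires is None else int(expires - time.time())

store = TTLStore()
store.insert(200, "Franc", using_ttl=86400)      # expire in one day
store.update(200, "Frank", using_ttl=259200)     # extend to three days
print(store.ttl(200))                            # roughly 259200 seconds left
store.update(200, "somefancyname", using_ttl=0)  # TTL 0 clears the expiry
print(store.ttl(200))                            # None: no expiration
```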
Using TTL with Cassandra sink connectors
This feature really shines when you use a sink connector to stream data into Cassandra from a Kafka topic on the fly. With KCQL (Kafka Connect Query Language) you can specify a TTL right in the mapping, ensuring your records are inserted with a default expiration time.
"connect.cassandra.kcql": "INSERT INTO mytable SELECT * FROM my-topic TTL=31536000"
The value 31536000 is one year expressed in seconds.
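A quick sanity check on that arithmetic (plain Python, nothing Cassandra-specific):

```python
# TTL values are plain seconds, so common durations are easy to derive.
DAY = 24 * 60 * 60    # 86400 seconds, the INSERT example above
THREE_DAYS = 3 * DAY  # 259200 seconds, the UPDATE example
YEAR = 365 * DAY      # 31536000 seconds, the KCQL example

print(DAY, THREE_DAYS, YEAR)  # 86400 259200 31536000
```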
I hope you found this post useful!