
Introduction to Apache Cassandra
Apache Cassandra is a NoSQL database designed to handle large amounts of data across many commodity servers. As a highly scalable, high-performance distributed database, it provides high availability with no single point of failure. In this blog, I focus mainly on reads and writes in Cassandra. For Cassandra's architecture, you can refer to the blog Apache Cassandra: Back to Basics. So let's get started with Apache Cassandra: Reads and Writes.
Apache Cassandra vs MySQL
| S.No | Cassandra | MySQL |
| --- | --- | --- |
| 1 | Apache Cassandra is a NoSQL database. | MySQL is a relational database. |
| 2 | The Apache Software Foundation released Cassandra in July 2008. | MySQL AB first released MySQL in May 1995; Oracle now develops it. |
| 3 | Apache Cassandra is written in Java. | MySQL is written in C and C++. |
| 4 | Cassandra does not provide full ACID guarantees; it offers atomicity, isolation, and durability (AID) but not strong consistency. | MySQL provides ACID properties. |
| 5 | Write operations in Cassandra are effectively O(1), since writes are append-only. | Read operations in MySQL take O(log n) on indexed data. |
| 6 | Cassandra has no foreign keys, so it does not support referential integrity. | MySQL has foreign keys, so it supports referential integrity. |
Features of Apache Cassandra
Cassandra has several remarkable features, including:
- Highly distributed: Each node in the cluster plays the same role. The data set is distributed across the cluster, and because there is no master node, the failure of any single node has little impact on serving requests.
- High scalability: Cassandra is highly scalable; you can add more hardware to serve more customers and, consequently, more data as your requirements grow.
- Fault tolerance: Cassandra is fault-tolerant. For instance, if a cluster has 4 nodes, each holding a copy of the same data, and one node stops serving, the other three can still serve requests.
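To make the fault-tolerance point concrete, here is a small sketch (not Cassandra code, just illustrative arithmetic) using the standard quorum formula, quorum = floor(RF / 2) + 1, to show how many replica failures a quorum-level operation can tolerate:

```python
# Illustrative sketch: how many replica failures a QUORUM operation tolerates.
# Assumes the standard quorum formula: quorum = floor(RF / 2) + 1.

def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for a QUORUM read/write."""
    return replication_factor // 2 + 1

def tolerated_failures(replication_factor: int) -> int:
    """Replicas that can be down while QUORUM still succeeds."""
    return replication_factor - quorum(replication_factor)

for rf in (3, 4, 5):
    print(f"RF={rf}: quorum={quorum(rf)}, tolerates {tolerated_failures(rf)} down")
```

With 4 replicas, as in the example above, a quorum needs 3 responses, so one node can be down and requests still succeed.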
Now let's come to the main point of this blog: reads and writes.
How Does Cassandra Write?
Here, we will learn how data is written in Cassandra.
Before delving into the steps involved in writing data, let us first learn some key terms:
Commit log: The commit log is an append-only transactional log. Cassandra uses it for recovery after a system failure, and it is what gives writes their durability.
Memtable: A memtable is an in-memory structure that holds a copy of recently written data. It collects writes and serves reads for data that has not yet been flushed to disk. Generally, each node has one memtable per CQL table.
SSTable (Sorted Strings Table): SSTables are the actual, immutable files on disk. This persistent file format takes the in-memory data held in memtables, orders it by key for fast access, and stores it on disk as an ordered, immutable set of files. Immutable means SSTables are never modified after being written; they are the final destination of the memtable's data.
Steps for writing in Cassandra:
- First of all, a write can be sent to any node in the cluster; the node that receives it is called the coordinator node.
- The node first appends the write to the commit log and then writes the data to an in-memory structure called the memtable. The memtable keeps writes sorted until it reaches a configurable limit, at which point it is flushed.
- Every write includes a timestamp.
- When the memtable's contents exceed the configurable threshold, or the commit log space fills up, the memtable is put in a queue and flushed to disk as an SSTable.
- The commit log is shared among tables. SSTables are immutable and cannot be written to again after the memtable is flushed; thus, a partition is typically stored across multiple SSTable files.
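The steps above can be sketched as a toy model. This is not Cassandra's implementation, just a minimal illustration of the commit-log-then-memtable-then-flush sequence; all names and the flush threshold are made up:

```python
# Minimal sketch of the write path: append to a commit log, buffer in an
# in-memory memtable, and flush to an immutable, sorted SSTable once a
# size threshold is crossed. Names and threshold are illustrative only.

class ToyWriteNode:
    def __init__(self, flush_threshold=3):
        self.commit_log = []          # append-only durability log
        self.memtable = {}            # in-memory: key -> (timestamp, value)
        self.sstables = []            # list of immutable, key-sorted tables
        self.flush_threshold = flush_threshold

    def write(self, key, value, timestamp):
        self.commit_log.append((key, value, timestamp))   # 1. commit log first
        self.memtable[key] = (timestamp, value)           # 2. then memtable
        if len(self.memtable) >= self.flush_threshold:    # 3. flush when full
            self.flush()

    def flush(self):
        # An SSTable is the memtable's data sorted by key, never modified again.
        sstable = sorted(self.memtable.items())
        self.sstables.append(sstable)
        self.memtable = {}

node = ToyWriteNode()
for i, k in enumerate(["b", "a", "c", "d"]):
    node.write(k, f"v{i}", timestamp=i)
print(len(node.sstables), node.memtable)   # one flushed SSTable; "d" still buffered
```

Note how the flushed SSTable is sorted by key even though the writes arrived out of order, which is what makes later disk reads fast.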

How Does Cassandra Read?
Reads in Cassandra are slower than writes, but still very fast.
Firstly, here are some of the optimization techniques worth understanding before we walk through the read path:
Key Caching:
The key cache reduces seeks into SSTables for frequently accessed data. Its keys are combinations of an SSTable file descriptor and a partition key, and its values are the offset locations of those partitions within the SSTable files. In other words, the key cache is implemented as a map structure.
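As a rough sketch, the map structure described above can be pictured as a dictionary keyed by (SSTable id, partition key). All identifiers and values here are hypothetical:

```python
# Illustrative sketch of the key cache as a map: the key combines an SSTable
# identifier with a partition key, and the value is the byte offset of that
# partition inside the SSTable file. All names and values are hypothetical.
key_cache = {}

def cache_offset(sstable_id, partition_key, offset):
    key_cache[(sstable_id, partition_key)] = offset

def lookup_offset(sstable_id, partition_key):
    # A hit lets the read skip the partition summary and partition index.
    return key_cache.get((sstable_id, partition_key))

cache_offset("sstable-42", "user:alice", 8192)
print(lookup_offset("sstable-42", "user:alice"))   # 8192
print(lookup_offset("sstable-42", "user:bob"))     # None (cache miss)
```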
Bloom Filter:
A Bloom filter is a probabilistic data structure that acts as a filter: it tells us whether an element *might* be present in an SSTable. It can return false positives but never false negatives, so a negative answer lets Cassandra skip that SSTable entirely.
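A minimal Bloom filter can be sketched in a few lines. This is a generic textbook version, not Cassandra's implementation; the sizes and hash scheme are arbitrary choices for illustration:

```python
# A minimal Bloom filter sketch: k hash functions set bits in a bit array.
# "Might contain" can report false positives but never false negatives,
# which is why it is safe to skip any SSTable the filter rules out.
import hashlib

class ToyBloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, key):
        # Derive k positions by salting the key and hashing with SHA-256.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # All k bits set => key *might* be present; any bit unset => definitely not.
        return all(self.bits[pos] for pos in self._positions(key))

bf = ToyBloomFilter()
bf.add("partition-key-1")
print(bf.might_contain("partition-key-1"))   # True
print(bf.might_contain("partition-key-2"))   # almost certainly False
```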
Cassandra processes a read in several stages to locate where the data is stored. It starts with the data in the memtable and finishes with the SSTables.
Steps to read in Cassandra:
- First of all, Cassandra checks whether the data is present in the memtable. If it is, Cassandra combines it with any relevant data from the SSTables and returns the result.
- If the data is not in the memtable, Cassandra tries to read it from the SSTables, using several optimizations along the way.
- Cassandra first checks the row cache. The row cache, if enabled, stores in memory a subset of the partition data that resides on disk in the SSTables.
- It then uses the Bloom filter (which indicates whether a partition key might exist in an SSTable) to determine whether a particular SSTable could contain the key.
- If the Bloom filter indicates the key may exist in an SSTable, Cassandra next checks the key cache, an off-heap memory structure that stores the partition index.
- If the partition key is present in the key cache, the read process skips the partition summary and partition index and goes directly to the compression offset map.
- Once the compression offset map identifies the key's location, Cassandra fetches the desired data from the correct SSTable.
- If the data is still not available, the coordinator node can initiate a read repair.
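The read sequence above can be sketched as a toy function. This is a simplified model, not Cassandra's code: the compression offset map is collapsed into a plain index lookup, and the merge-by-timestamp step is elided:

```python
# Hedged sketch of the read path: memtable first, then row cache, then each
# SSTable guarded by a Bloom-filter check and a key-cache lookup.
# All structures here are simplified, illustrative stand-ins.
def read(key, memtable, row_cache, sstables):
    # 1. Memtable hit: newest data (merging with older SSTable versions
    #    by timestamp is elided for brevity).
    if key in memtable:
        return memtable[key]
    # 2. Row cache hit: fully materialized row already in memory.
    if key in row_cache:
        return row_cache[key]
    # 3. Consult each SSTable, skipping ones the Bloom filter rules out.
    for sstable in sstables:
        if not sstable["bloom"](key):           # definite miss: skip this file
            continue
        offset = sstable["key_cache"].get(key)  # key-cache hit skips the index
        if offset is None:
            offset = sstable["index"].get(key)  # partition summary/index path
        if offset is not None:
            return sstable["data"][offset]
    return None  # not found; a real coordinator might trigger a read repair

sstable = {
    "bloom": lambda k: k == "a",   # stand-in Bloom filter for this toy SSTable
    "key_cache": {},
    "index": {"a": 0},
    "data": ["value-a"],
}
print(read("a", memtable={}, row_cache={}, sstables=[sstable]))   # value-a
```

The ordering matters: each stage is cheaper than the next, so the common case returns before any disk-shaped work happens.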

I hope you enjoyed this blog, in which I explained how Cassandra reads and writes.
References
Apache Cassandra Documentation