In this blog, we are going to discuss Consul, an amazing tool for service discovery and service configuration.
Hold on a second!! You probably have a few questions in mind:
- Aren’t there already plenty of tools doing this, and in wide use?
- “I have been using tools like ZooKeeper for service discovery, and they are pretty good.”
- Do I really need to know about this tool?
Well, my answer for you is YES!! And now, let’s see why!!
Step 1: Let’s first understand what Consul is!!
Consul is a tool for discovering and configuring services in your infrastructure. It provides features that are very helpful for managing your services after deployment: service discovery, health checking, dynamic configuration, load balancing, telemetry, and more.
Again a question: that is a lot of features, so can we group them into categories?
Of course we can. According to Consul’s documentation, here are the key feature categories Consul provides:
- Service Discovery: Clients of Consul can provide a service, such as api or mysql, and other clients can use Consul to discover providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend upon.
- Health Checking: Consul clients can provide any number of health checks, either associated with a given service (“is the web-server returning 200 OK”), or with the local node (“is memory utilization below 90%”). This information can be used by an operator to monitor cluster health, and it is used by the service discovery components to route traffic away from unhealthy hosts.
- KV Store: Applications can make use of Consul’s hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use.
- Multi-Datacenter: Consul supports multiple datacenters out of the box. This means users of Consul do not have to worry about building additional layers of abstraction to grow to multiple regions.
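To make the first three categories concrete, here is a minimal sketch in Python (standard library only) of talking to a local agent over Consul’s real HTTP API: registering a service with an HTTP health check (`PUT /v1/agent/service/register`) and reading a key from the KV store (`GET /v1/kv/<key>`, which returns the value base64-encoded). The service name, port, and key are made-up examples, and the snippet assumes an agent is running on `localhost:8500`.

```python
import base64
import json
import urllib.request

CONSUL = "http://localhost:8500"  # default address of a local Consul agent


def register_service(name, port):
    """Register a service with an HTTP health check on the local agent."""
    payload = {
        "Name": name,
        "Port": port,
        # Consul will poll this URL; "Interval" controls how often
        "Check": {"HTTP": f"http://localhost:{port}/health", "Interval": "10s"},
    }
    req = urllib.request.Request(
        f"{CONSUL}/v1/agent/service/register",
        data=json.dumps(payload).encode(),
        method="PUT",
    )
    urllib.request.urlopen(req)


def decode_kv_value(entry):
    """KV reads return the value base64-encoded; decode it back to text."""
    return base64.b64decode(entry["Value"]).decode()


def read_key(key):
    """Read a single key; the API returns a JSON list of entries."""
    with urllib.request.urlopen(f"{CONSUL}/v1/kv/{key}") as resp:
        return decode_kv_value(json.loads(resp.read())[0])
```

With an agent running, `register_service("web", 8080)` followed by `read_key("config/web/timeout")` (after a `consul kv put`) would exercise both features; only the endpoints are Consul’s, everything else here is illustrative.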
Step 2: How does Consul work?
There’s a lot going on under Consul’s architecture, and discussing it properly would take many more hours; you can go through Consul’s documentation for the details. But for beginners, here’s what you need to know:
- Every node that provides services to Consul runs a Consul agent. Running an agent is not required for discovering other services or getting/setting key/value data. The agent is responsible for health checking the services on the node as well as the node itself.
- The agents talk to one or more Consul servers. The Consul servers are where data is stored and replicated. The servers themselves elect a leader. While Consul can function with one server, 3 to 5 is recommended to avoid failure scenarios leading to data loss. A cluster of Consul servers is recommended for each datacenter.
- Components of your infrastructure that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.
- Each datacenter runs a cluster of Consul servers. When a cross-datacenter service discovery or configuration request is made, the local Consul servers forward the request to the remote datacenter and return the result.
- Consul uses a gossip protocol to manage membership and broadcast messages to the cluster. All of this is provided through the use of the Serf library. The gossip protocol used by Serf is based on “SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol”, with a few minor adaptations.
- Gossip is done over UDP with a configurable but fixed fanout and interval. This ensures that network usage is constant with regards to the number of nodes. Complete state exchanges with a random node are done periodically over TCP, but much less often than gossip messages. This increases the likelihood that the membership list converges properly since the full state is exchanged and merged. The interval between full state exchanges is configurable or can be disabled entirely.
- Failure detection is done by periodic random probing using a configurable interval. If the node fails to ack within a reasonable time (typically some multiple of RTT), then an indirect probe is attempted. An indirect probe asks a configurable number of random nodes to probe the same node, in case there are network issues causing our own node to fail the probe. If both our probe and the indirect probes fail within a reasonable time, then the node is marked “suspicious” and this knowledge is gossiped to the cluster. A suspicious node is still considered a member of the cluster. If the suspect member of the cluster does not dispute the suspicion within a configurable period of time, the node is finally considered dead, and this state is then gossiped to the cluster.
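The failure-detection flow above (direct probe, then indirect probes via random peers, then suspicion, then death on timeout) can be sketched as a toy state machine. This is not Consul’s or Serf’s implementation, just a simplified SWIM-style model; the `reachable(src, dst)` callback, node names, and `indirect_count` default are all made up for illustration.

```python
import enum
import random


class State(enum.Enum):
    ALIVE = "alive"
    SUSPECT = "suspect"
    DEAD = "dead"


class FailureDetector:
    """Toy SWIM-style detector: direct probe, then indirect probes,
    then suspicion, then death after an unrefuted timeout."""

    def __init__(self, members, indirect_count=3):
        # everyone starts out alive; "self" represents our own node
        self.state = {m: State.ALIVE for m in members}
        self.indirect_count = indirect_count

    def probe(self, target, reachable):
        """reachable(src, dst) models whether src's probe of dst gets an ack."""
        if reachable("self", target):
            self.state[target] = State.ALIVE
            return
        # direct probe failed: ask k random alive peers to probe for us
        peers = [m for m in self.state
                 if m not in ("self", target) and self.state[m] is State.ALIVE]
        helpers = random.sample(peers, min(self.indirect_count, len(peers)))
        if any(reachable(h, target) for h in helpers):
            self.state[target] = State.ALIVE    # only our own link was bad
        else:
            self.state[target] = State.SUSPECT  # would now be gossiped

    def suspicion_timeout(self, target):
        """A suspect that never disputes the suspicion is declared dead."""
        if self.state[target] is State.SUSPECT:
            self.state[target] = State.DEAD
```

For example, if no node can reach `"a"`, a `probe` marks it SUSPECT and a later `suspicion_timeout` marks it DEAD; if only our own link to `"a"` is broken, the indirect probes succeed and `"a"` stays ALIVE.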
Step 3: Again the same question: why Consul? Let’s see!!
There are a lot of tools that perform one or another of the features Consul provides, and some might even do it a bit better. But the problems Consul solves are varied, and while other options are available to solve some of them, no single system provides all of Consul’s features. In my opinion, Consul’s key strength is bundling all of these features together under one umbrella.
For a detailed comparison between Consul and other tools, you can go through the following link.
Final Step: Setting up Consul locally.
- Download link
- Start a single-node Consul cluster in dev mode: $ ./consul agent -dev
- Consul’s UI is served on port 8500 by default.
- To list the nodes in the catalog: curl localhost:8500/v1/catalog/nodes
- To see the members of the Consul cluster: ./consul members
The above points also give us a peek at the ways we can access Consul:
- The CLI (./consul subcommands).
- The HTTP API (e.g. via curl).
- The web UI.
- Code implementation using client libraries.
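As a taste of the code route, here is a hedged sketch (Python standard library only, no Consul client library) that asks the local agent for the healthy instances of a service through the real /v1/health/service/&lt;name&gt;?passing endpoint. The service name "web" and the localhost:8500 address are assumptions; the response shape (Node/Service objects per entry) follows Consul’s health API.

```python
import json
import urllib.request

CONSUL = "http://localhost:8500"  # default local agent address


def service_endpoints(entries):
    """Turn /v1/health/service/<name> results into (address, port) pairs,
    falling back to the node address when the service registered none."""
    out = []
    for e in entries:
        addr = e["Service"].get("Address") or e["Node"]["Address"]
        out.append((addr, e["Service"]["Port"]))
    return out


def discover(name):
    """Ask the local agent for healthy (passing) instances of a service."""
    url = f"{CONSUL}/v1/health/service/{name}?passing"
    with urllib.request.urlopen(url) as resp:
        return service_endpoints(json.loads(resp.read()))
```

With a dev-mode agent running and a "web" service registered, `discover("web")` would return the addresses to connect to; in production you would more likely use an established client library, but the HTTP calls underneath look like this.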
In our next blog, we will cover the “configuring services” part using the KV store. So stay connected.
Thanks for taking the time to go through this blog. Keep reading our blogs, because we at Knoldus believe in gaining knowledge by sharing it.