Prolance is a tool that monitors the protocols running on a server. With its help, we can keep track of all of those protocols. So far, we have implemented it for two protocols: Dynamic Host Configuration Protocol (DHCP) and Active Directory (AD).
Dynamic Host Configuration Protocol (DHCP):
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to dynamically assign an Internet Protocol (IP) address to any device, or node, on a network so that it can communicate using IP. A DHCP server maintains a unique IP address for each host.
Active Directory (AD):
Active Directory is a primary feature of Windows Server, an operating system that runs both local and Internet-based servers. It allows administrators to manage user accounts and resources and to apply policies consistently, as needed by an organisation.
Rust:
Rust is a multi-paradigm systems programming language that provides strong memory safety while maintaining high performance. It runs blazingly fast, helps prevent crashes, and eliminates data races. Its distinctive approach to memory management is built on the concept of ownership.
Rust also comes with a capable build system, Cargo, which keeps your dependencies up to date or pins them to specific versions. The language provides powerful features such as zero-cost abstractions, safe memory management, fearless concurrency, and more.
It also prevents segmentation faults and guarantees thread safety.
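To make the ownership concept concrete, here is a minimal sketch (the strings and helper functions are illustrative, not part of Prolance): a function that takes a `String` by value owns it, while a function that takes a reference merely borrows it, so the caller can keep using the value afterwards.

```rust
// Takes ownership of the String; it is dropped when the function returns.
fn consume(s: String) -> usize {
    s.len()
}

// Borrows the string immutably; the caller keeps ownership.
fn borrow(s: &str) -> usize {
    s.len()
}

fn main() {
    let s = String::from("DHCP log line");
    // Ownership moves into `consume`; `s` can no longer be used afterwards.
    let length = consume(s);
    assert_eq!(length, 13);

    let t = String::from("AD log line");
    // Borrowing lets a function read the data without taking ownership.
    let length = borrow(&t);
    assert_eq!(length, 11);
    println!("{t} is still usable here");
}
```

The compiler enforces these rules at compile time, which is how Rust rules out use-after-free bugs and data races without a garbage collector.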
Kafka:
Kafka is used to build real-time streaming data pipelines and real-time streaming applications. A data pipeline reliably processes and moves data from one system to another, and a streaming application is an application that consumes streams of data.
Kafka provides three main functions to its users:
- Publish and subscribe to streams of records.
- Durably store streams of records in the order in which they were generated.
- Process streams of records in real time.
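The publish/subscribe model above can be sketched in plain Rust with a standard-library channel standing in for a Kafka topic (in the real project a Kafka client crate would be used; the channel here is only an illustration of the pattern):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The channel stands in for a Kafka topic: the sender "publishes"
    // records, the receiver "subscribes" to them in order.
    let (producer, consumer) = mpsc::channel::<String>();

    let publisher = thread::spawn(move || {
        for record in ["192.168.1.100 Knoldus-2856", "192.168.1.101 Knoldus-2857"] {
            producer.send(record.to_string()).expect("send failed");
        }
        // Dropping the sender closes the "topic".
    });

    // Records arrive in the order in which they were published.
    let received: Vec<String> = consumer.iter().collect();
    publisher.join().expect("publisher thread panicked");

    assert_eq!(received.len(), 2);
    assert_eq!(received[0], "192.168.1.100 Knoldus-2856");
}
```

A real Kafka topic adds what the channel lacks: durable, replicated storage of the records and the ability for many independent consumers to read the same stream.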
As mentioned above, Active Directory is a primary feature of Windows Server, so we are using Windows Server 2012, on which we have configured both of our protocols: Dynamic Host Configuration Protocol and Active Directory. Both protocols record all of their activity in the form of logs, so to monitor them we work on the logs they generate.
We have two scenarios to process the logs for monitoring:
Scenario 1: We can directly compress the raw logs generated by the protocols and stream them on the Kafka topic, from where we can monitor the activities in real time.
Scenario 2: To monitor efficiently, we want to filter the logs as per our requirements; for example, in the case of Dynamic Host Configuration Protocol we want to monitor the device names and the IP addresses assigned to those devices. So in this scenario, we first filter the logs and then stream them on the Kafka topic in compressed form.
Key features of Prolance:
- Monitor the running activities of the protocols.
- Filter out the activity logs as per our monitoring requirements.
- The logs should stream continuously on the Kafka topic.
- The logs should stream in the compressed form.
- The user can schedule this process according to their requirements.
Monitor the running activities of the protocols:
DHCP provides IP addresses to client devices, so we can monitor the devices on the network through the DHCP logs. With Active Directory, we can monitor the Active Directory users, the IP addresses assigned to those users, and the users' device names through the Active Directory logs.
Filter out the activity logs as per our monitoring requirement:
We can filter the logs as per our monitoring requirements. For example, in DHCP we want to monitor the IP addresses and device names, so we filter the IP address and device name out of each DHCP log entry.
Example: 192.168.1.100 Knoldus-2856
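As a sketch of this filtering step, assume a simplified comma-separated audit-log layout of ID, date, time, description, IP address, host name, MAC address (the field order and the sample line are assumptions for illustration, not Prolance's actual parser):

```rust
// Extract "IP host" from a hypothetical comma-separated DHCP audit-log line
// laid out as: ID,Date,Time,Description,IP Address,Host Name,MAC Address.
fn filter_dhcp_line(line: &str) -> Option<String> {
    let fields: Vec<&str> = line.split(',').collect();
    if fields.len() < 6 {
        return None; // malformed line, skip it
    }
    let ip = fields[4].trim();
    let host = fields[5].trim();
    if ip.is_empty() || host.is_empty() {
        return None;
    }
    Some(format!("{ip} {host}"))
}

fn main() {
    let line = "10,06/15/21,10:23:31,Assign,192.168.1.100,Knoldus-2856,00-0C-29-1A-2B-3C";
    assert_eq!(
        filter_dhcp_line(line),
        Some("192.168.1.100 Knoldus-2856".to_string())
    );
    // Lines that don't match the expected layout are dropped.
    assert_eq!(filter_dhcp_line("bad line"), None);
}
```

Returning `Option` lets the caller silently skip malformed lines instead of aborting the whole monitoring run.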
The logs should stream continuously on the Kafka topic:
To monitor the devices in real time, we need a continuous stream of logs on the Kafka topic.
The logs should stream in the compressed form:
The number of logs on the network will be high, so we need to stream the logs in compressed form. We have used Gzip compression in this project, which automatically compresses the data at the time of streaming.
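One way to get this automatic compression is on the Kafka producer side: setting the producer's `compression.type` property makes the client compress each batch of records with gzip before it is sent to the broker, with no change to the application code. As a producer configuration fragment:

```properties
# Kafka producer configuration: compress record batches with gzip
# before they leave the client.
compression.type=gzip
```

Consumers decompress the batches transparently, so the monitoring side reads the records as plain text.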
The user can schedule this process according to their requirements:
To get continuous data, we have to schedule the program according to our requirements. So, we have added a job scheduler to our project, which runs the program at every given interval of time.
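The scheduling idea can be sketched as a fixed-interval loop (the `schedule` function and the placeholder job are illustrative; in the real project the job would filter the latest logs and stream them to Kafka):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Run `job` every `interval`, up to `runs` times.
fn schedule<F: FnMut()>(interval: Duration, runs: u32, mut job: F) {
    for _ in 0..runs {
        let started = Instant::now();
        job();
        // Sleep only for whatever is left of the interval after the job ran,
        // so the ticks stay evenly spaced.
        if let Some(remaining) = interval.checked_sub(started.elapsed()) {
            thread::sleep(remaining);
        }
    }
}

fn main() {
    let mut ticks = 0;
    // Placeholder job: in Prolance this would be the filter-and-stream step.
    schedule(Duration::from_millis(10), 3, || ticks += 1);
    assert_eq!(ticks, 3);
}
```

Subtracting the job's own runtime from the sleep keeps the interval steady even when a run takes a while, which matters when each run has to ship a fresh batch of logs.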
Thanks for reading this blog!!!