Rancher: Complete Container Management Platform


Overview

Rancher is an open-source container management platform. It offers a complete set of infrastructure services for containers, including networking, storage, host management, and load balancing, so software teams can easily deploy and manage containerized applications from a single tool.

High-level Architecture

The figure below depicts a Rancher server installation that manages two Kubernetes clusters: one created by RKE and another created by GKE.

Rancher 2.0 High-Level Architecture

Rancher Server Components

Rancher API Server

The Rancher API server is built on top of an embedded Kubernetes API server and an etcd database. All Rancher-specific resources created through the Rancher API are translated into CRD (Custom Resource Definition) objects, whose lifecycle is managed by one or more Rancher controllers. The API server provides the following functionality:

  • User-facing API schema generation, with the ability to plug in custom formatters and validators
  • Controller interface generation for CRDs and native Kubernetes object types
  • Object lifecycle management framework
  • Conditions management framework
  • Simplified generic controller implementation that encapsulates TaskQueue and SharedInformer logic in a single interface
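Since every Rancher-specific resource is backed by a CRD, these objects can be inspected with plain kubectl against the cluster that runs the Rancher server. A minimal sketch, assuming kubectl is pointed at that cluster (the exact CRD names may vary by Rancher version):

```shell
# List the CRDs that Rancher registers (they live under the cattle.io API group)
kubectl get crds | grep cattle.io

# Inspect one of them, e.g. the clusters managed by this Rancher server
kubectl get clusters.management.cattle.io
```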

Management Controllers

The management controllers perform activities at the Rancher server level that are not specific
to an individual cluster. These activities include:

  • Configuring access control policies to clusters and projects
  • Managing pod security policy templates
  • Provisioning clusters by invoking the necessary Docker Machine drivers and Kubernetes engines like RKE and GKE
  • Managing users – CRUD operations on users
  • Managing global-level catalog, fetching the content of the upstream Helm repo, etc.
  • Managing cluster and project-level catalogs
  • Aggregating and displaying cluster stats and events
  • Managing node drivers, node templates, and node pools
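Most of these server-level objects (users, catalogs, node drivers, node templates) are themselves CRDs, so they can be listed with kubectl on the Rancher server cluster. A sketch, assuming the management.cattle.io API group name that Rancher 2.x uses:

```shell
# Discover the management-level resource types Rancher registers
kubectl api-resources | grep management.cattle.io

# CRUD on users, for example, is just CRUD on these objects
kubectl get users.management.cattle.io
```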

User Cluster Controllers

User cluster controllers perform activities specific to a cluster. Activities include:

  • Managing workloads, for example creating pods and deployments in each cluster
  • Applying roles and bindings that are defined in global policies into every cluster
  • Propagating information from cluster to server: events, stats, node info, and health
  • Managing network policies
  • Managing alerts, log aggregation, and CI/CD pipelines
  • Managing resource quotas
  • Propagating secrets down from Rancher server to individual clusters

User cluster controllers connect to API servers in GKE clusters directly but tunnel through the cluster
agent to connect to API servers in RKE clusters.

Authentication Proxy

The authentication proxy proxies all Kubernetes API calls. It integrates with authentication services like
local authentication, Active Directory, and GitHub. On every Kubernetes API call, the authentication
proxy authenticates the caller and sets the proper Kubernetes impersonation headers before forwarding
the call to Kubernetes masters. It communicates with Kubernetes clusters using a service account.
The authentication proxy connects to API servers in GKE clusters directly, but tunnels through the
cluster agent to connect to API servers in RKE clusters.
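Because every call goes through the authentication proxy, a downstream cluster's API is reached at a Rancher-served URL rather than at the cluster itself. A hedged sketch of what such a call looks like, where rancher.my.org reuses the hostname from the Helm example later in this post, and CLUSTER_ID and TOKEN are placeholders for values issued by Rancher:

```shell
# Placeholders: substitute a real cluster ID and API token from the Rancher UI/API
CLUSTER_ID="c-xxxxx"
TOKEN="token-xxxxx:secret"

# Kubernetes API calls are addressed to Rancher, which authenticates the caller,
# sets impersonation headers, and forwards the request to the downstream cluster
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://rancher.my.org/k8s/clusters/${CLUSTER_ID}/version"
```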

Rancher Agent Components

Cluster Agents

Rancher deploys one cluster agent for each Kubernetes cluster under management. The cluster agent opens a WebSocket tunnel back to the Rancher server so that the user cluster controllers and authentication proxy can communicate with the user cluster's Kubernetes API server.

Note that only RKE clusters and imported clusters use the cluster agent to tunnel the Kubernetes API. Cloud Kubernetes services like GKE already expose an API endpoint on the public Internet and therefore do not require the cluster agent to function as a tunnel.

Cluster agents serve two additional functions:

  • They serve as a proxy for other services in the cluster, such as Rancher's built-in alerting, log aggregation, and CI/CD pipelines. In fact, any service running in a user cluster can be exposed through the cluster agent. This capability is sometimes called "the magic proxy."
  • During registration, cluster agents get service account credentials from the Kubernetes cluster and send the service account credentials to the Rancher server.
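On a managed cluster, the agent is visible as an ordinary deployment, so its tunnel can be checked with kubectl. A sketch, assuming the cattle-system namespace and cattle-cluster-agent name used by Rancher 2.x:

```shell
# The cluster agent runs as a deployment in the cattle-system namespace
kubectl -n cattle-system get deployment cattle-cluster-agent

# Its logs show the WebSocket tunnel being established back to the Rancher server
kubectl -n cattle-system logs deployment/cattle-cluster-agent | grep -i "connect"
```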

Node Agents

Node agents are primarily used by RKE to deploy components during the initial install and subsequent
upgrades. However, node agents are also deployed on cloud Kubernetes clusters like GKE, even though they
are not needed there for Kubernetes install and upgrade. Node agents serve several additional functions for
all clusters:

  • Fallback for cluster agents: if the cluster agent is unavailable for any reason, the Rancher server uses the node agent to connect to the Kubernetes API server.
  • Proxy for the kubectl shell: the Rancher server connects through node agents to tunnel the kubectl shell in the UI. A node agent runs with more privileges than a cluster agent, and that additional privilege is required to tunnel the kubectl shell.
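Since node agents run on every node, on a managed cluster they show up as a daemonset rather than a deployment. A sketch, assuming the cattle-node-agent name used by Rancher 2.x:

```shell
# One node-agent pod per node, managed by a daemonset in cattle-system
kubectl -n cattle-system get daemonset cattle-node-agent

# Confirm a node-agent pod is running on each node
kubectl -n cattle-system get pods -o wide | grep cattle-node-agent
```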

Installation

Rancher can be deployed in either a single-node or a multi-node setup.

Single Node Install

In this installation scenario, you install Docker on a single Linux host and then deploy Rancher on that host as a single Docker container.

Run the command below to install the Rancher server. Note that Rancher 2.x listens on ports 80 and 443, not on port 8080 as older 1.x installs did:

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest

Open a browser and enter https://SERVER_IP_ADDRESS. The web UI will load, and users can start creating or importing clusters.

Rancher 2.x stores its state under /var/lib/rancher inside the container rather than in MySQL. To persist that data across container restarts, bind mount a host directory (shown here as a placeholder) onto that path:

sudo docker run -d -v <HOST_DIR>:/var/lib/rancher --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
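On recent Rancher versions (2.6 and later), the initial admin password is generated at startup and printed to the container logs; older versions instead prompt you to set a password on first login. A sketch of recovering it, assuming the container was started from the rancher/rancher:latest image:

```shell
# Find the container ID of the running Rancher server
CONTAINER_ID=$(docker ps -q -f ancestor=rancher/rancher:latest)

# Recent Rancher releases print a generated bootstrap password to the logs
docker logs "$CONTAINER_ID" 2>&1 | grep "Bootstrap Password:"
```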

High Availability (HA) Install

Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher’s Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.

It is installed using the Helm package manager for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents.

helm install rancher rancher-alpha/rancher \
  --namespace knoldus-system \
  --set hostname=rancher.my.org \
  --set replicas=3
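Before helm install can resolve rancher-alpha/rancher, the chart repository must be added and the target namespace must exist. A sketch of those preceding steps, assuming Rancher's published chart repo URL for the alpha channel (matching the repo name used above; stable and latest channels also exist) and noting that the Rancher chart also expects cert-manager to be installed for its default self-signed TLS setup:

```shell
# Add Rancher's alpha chart repository and refresh the local index
helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
helm repo update

# Create the namespace the chart will be installed into
kubectl create namespace knoldus-system

# Verify the repository is registered
helm repo list | grep rancher-alpha
```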

Important Rancher features

  • Cluster provisioning and import: Rancher lets you create new clusters or add existing ones to it
  • Concept of projects: Rancher introduces the concept of projects for better grouping of namespaces
  • Extended RBAC control: User permissions can be configured per project across clusters
  • Easy workload deployment: Users can use the Rancher UI to deploy their workloads without updating a YAML file
  • Monitoring and alerting: Allows users to create notifications and push cluster logs to different backends
  • Extensive application catalog: Similar to the app store on your smartphone, but for Kubernetes
