Migrating from VM to Kubernetes Engine with Anthos


Modern application development typically uses microservices and containers to improve an application’s agility. Containers, Docker, and Kubernetes bring agility and portability to applications. But what should we do with applications that already exist as monoliths? Enterprises spend a lot of effort modernizing their applications, which can mean a long journey.

For some organizations, existing monolithic systems are holding back business initiatives and the business processes that rely on them. For others, the benefits of the public cloud, like cost savings and higher levels of productivity, are often presented as an “all or nothing” choice. And rewriting existing applications for Kubernetes isn’t always possible or feasible to do manually. That’s where Migrate for Anthos can help. It provides a near real-time path for migrating an existing VM, making it available as a Kubernetes-hosted pod with all the benefits of running your applications in a Kubernetes cluster.

Anthos is a modern application management platform announced by Google at Next ’19. It provides the tools and technology you need for modern, hybrid, and multi-cloud solutions, all built on the foundations of GKE. It enables several features, including:

  • Infrastructure provisioning in both cloud and on-premises.
  • Infrastructure management tooling, security, policies and compliance solutions.
  • Streamlined application development, service discovery and telemetry, service management, and workload migration from on-premises to cloud.

Building Blocks of Anthos

Google Kubernetes Engine

Google Kubernetes Engine lies at the heart of Anthos. Using the GKE control plane, customers can manage distributed infrastructure in on-premises data centers, Google’s cloud, and other cloud platforms.

GKE On-Premise

Here Google also delivers a software platform based on Kubernetes and consistent with GKE. Users can run it on any compatible hardware, and management of the platform falls under Google’s purview: everything from upgrading to the latest Kubernetes versions to applying the most recent patches is treated as a logical extension of GKE. GKE On-Prem currently operates as a virtual appliance on VMware vSphere 6.5, while support for hypervisors such as KVM and Hyper-V is in the pipeline.


Istio

The Istio service mesh is aimed at facilitating federated network management throughout the platform. Istio serves as a mesh holding together the components of applications spread across GCP, data centres, and other clouds. It delivers on that count through seamless integration with software-defined networks such as Cisco ACI, VMware NSX, and Google’s own Andromeda. Customers already working with network appliances such as the F5 will be able to leverage Istio alongside their firewalls and load balancers.


Velostrata

Google acquired this cloud migration technology last year to augment Kubernetes. It delivers two significant capabilities: converting existing VMs into pods (Kubernetes applications), and streaming on-premises physical and virtual machines to generate clones in Compute Engine instances. Velostrata is the first P2K (physical-to-Kubernetes) migration tool built by Google, and this capability is carried forward in Migrate for Anthos, which is currently in beta.

Anthos Compared With Others

Both AWS and Azure have hybrid cloud offerings in AWS Outposts and Azure Stack. But they are not the same as Anthos, mostly for one reason.

AWS Outposts and Azure Stack are limited to combining on-premises infrastructure with their respective cloud provider, with no support for other clouds. Anthos, by contrast, manages hybrid multi-cloud environments, not just hybrid cloud environments, which makes it a unique offering for multi-cloud users.

Quickstart Using a Linux VM

Let’s start with a very basic example of migrating a Linux VM to a Kubernetes cluster. It’s simpler to do this through the GCP Console (UI), but the CLI gives much more control, so we’ll be using the gcloud SDK unless noted otherwise.

Before you begin

Let’s provision a project that will host our main Kubernetes cluster. If you don’t have the Cloud SDK installed, install it first. Open a terminal and run the following commands:

export ZONE=<ZONE>
export PROJECT_ID=<PROJECT_ID>

gcloud init
gcloud projects create ${PROJECT_ID} --name=${PROJECT_ID}
gcloud config set project ${PROJECT_ID}

Create Kubernetes Cluster

Let’s now create our Kubernetes cluster on GCloud using Google Kubernetes Engine (GKE). I have chosen the n1-standard-4 machine type to make sure we have room to run our workload as well as the additional service pods that come with Migrate for Anthos. Migrate for Anthos currently supports only certain operating systems for nodes, so we use the Ubuntu node image.

export CLUSTER_NAME=<CLUSTER_NAME>

gcloud container --project ${PROJECT_ID} \
  clusters create ${CLUSTER_NAME} --zone ${ZONE} \
  --machine-type n1-standard-4 \
  --image-type "UBUNTU" \
  --num-nodes <number-of-nodes>

The gcloud container clusters create command offers many configuration options you might want to set. These include choosing node machine types, specifying --network and --subnetwork, and enabling alias IP ranges.
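As a sketch of how those options fit together (the network and subnet names below are placeholders, not values from this walkthrough):

```shell
# Hypothetical example: create a cluster on a custom VPC with alias IPs enabled.
# "my-vpc" and "my-subnet" are illustrative names only.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --image-type "UBUNTU" \
  --num-nodes 3 \
  --network my-vpc \
  --subnetwork my-subnet \
  --enable-ip-alias
```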

Create Source Machine

We will create a new predefined machine with Google Compute Engine and install the NGINX web server on it.

export SOURCE_MACHINE=<SOURCE_MACHINE>

gcloud compute instances create ${SOURCE_MACHINE} --machine-type n1-standard-2 --zone ${ZONE}
gcloud compute ssh ${SOURCE_MACHINE} --zone ${ZONE}
sudo su -
apt-get update
apt-get install nginx -y
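Before exiting the VM, you can optionally confirm NGINX came up, a quick sanity check that isn’t strictly part of the migration steps:

```shell
# Still inside the source VM: verify NGINX is running and serving its default page.
systemctl status nginx --no-pager
curl -s http://localhost | head -n 5
```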

Setup Migrate for Anthos

The following commands install the Migrate for Anthos components on your processing cluster.

gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE} --project ${PROJECT_ID}

migctl setup install

Verify the installation by using the following command:

migctl doctor

Add Source Machine

Now, specify the migration source you’re migrating from. We have chosen Compute Engine (ce) as the source.

migctl source create ce ${SOURCE_MACHINE} --project ${PROJECT_ID} --zone ${ZONE}

This command adds details needed to migrate from the source you specify — VMware, AWS, Azure, or Compute Engine. You give the source a name that you will use later when creating the migration itself. After you add the source, your cluster should have a new storage class whose name is your source name.

kubectl get storageclass

Create Migration Plan

We begin migrating VMs by creating a migration, which produces a migration plan file. The migration is implemented as a Kubernetes Custom Resource Definition (CRD), and the plan file also contains additional resources such as a Kubernetes PersistentVolumeClaim. You can edit the migration plan before executing it.

migctl migration create my-migration --source ${SOURCE_MACHINE} --vmId ${SOURCE_MACHINE}
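If you want to review or tweak the plan before executing it, migctl provides commands along these lines (check your installed version’s help output, as subcommands and flags have changed across releases):

```shell
# Download the migration plan YAML for inspection or editing
migctl migration get my-migration

# After editing the generated file, upload the modified plan
migctl migration update my-migration --main-config my-migration.yaml
```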

Execute Migration Plan

We begin the migration with a command that generates the target container artefacts, extracting them using the processing cluster you created while installing Migrate for Anthos.

migctl migration generate-artifacts my-migration

This command will begin the migration and as part of this process, it will:

  • Copy files and directories representing the VM to a container image registry as images.
  • Migrate for Anthos creates two images: a runnable image for deployment to another cluster and a non-runnable image layer that can be used to update the container image in the future. See Customizing a migration plan for information on how to identify these images.
  • Generate configuration YAML files that you can use to deploy the VM to a Kubernetes cluster.

You can check the status of the migration with the following command.

migctl migration status my-migration

Cleaning up After Migration

When you are ready to deploy your workload to a production cluster, first remove references to the PersistentVolumeClaim (PVC) and PersistentVolume (PV) created in the processing cluster. The cleanup step removes interim processing objects as well as the PVC and PV used, but it doesn’t delete the underlying storage. To do so, use the following command.

migctl migration delete my-migration

Deploy to Kubernetes

Now that the migration process is done, all that’s left is to deploy the workload to the Kubernetes cluster. One of the outputs generated by the migration is deployment_spec.yaml. We’ll use it to deploy the workload to the Kubernetes cluster.

kubectl apply -f deployment_spec.yaml
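The generated spec is a standard Kubernetes manifest. The actual file produced by Migrate for Anthos will differ, but as a rough, illustrative sketch of its general shape (names and image paths here are assumptions, not the real generated content):

```yaml
# Illustrative only: the real deployment_spec.yaml is generated by
# Migrate for Anthos and its contents will differ from this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: source-machine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: source-machine
  template:
    metadata:
      labels:
        app: source-machine
    spec:
      containers:
      - name: source-machine
        # Image pushed to the container registry during artifact generation
        image: gcr.io/<PROJECT_ID>/source-machine:latest
        ports:
        - containerPort: 80
```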

Since we’re running just a simple web server, all we need to do is expose it with a LoadBalancer service and see it running.
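One way to do that, assuming the generated Deployment is named source-machine (an assumption; use the actual name from your deployment_spec.yaml):

```shell
# Expose the migrated workload via a LoadBalancer Service.
# "source-machine" is a placeholder for the Deployment name in your spec.
kubectl expose deployment source-machine --type=LoadBalancer --port=80

# Wait for an external IP to be assigned, then browse to it.
kubectl get service source-machine --watch
```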


One thing that’s important to note is that this app is our entire VM running inside a container. However, that’s also the cause of a lot of VM bloat that often isn’t needed in containers. Migrate for Anthos is a great way to help you consolidate and modernize some legacy applications and to get started in a larger modernization plan. Depending on your plan, the next step may be to start breaking apart your app into microservices or smaller containers.

As more companies embrace the cloud, it’s becoming obvious that there is no one approach to the technology. This is making multi-cloud and hybrid platforms more relevant, and where services such as Anthos can be real differentiators.



Written by 

Sudeep James Tirkey is a software consultant with more than two years of experience. He likes to explore new technologies and trends in the IT world. His hobbies include playing football and badminton, reading, and travelling. Sudeep is familiar with programming languages such as Java, Scala, C, and C++, and he is currently working on DevOps and reactive technologies like Jenkins, DC/OS, Ansible, Scala, Java 8, Lagom and Kafka.