Getting started

Understanding your MigrateR architecture

The MigrateR platform infrastructure consists of a Kubernetes cluster, Istio (a service mesh manager), and Knative Serving, which supports deploying and serving serverless applications and functions.

_images/knative.png

Kubernetes was started by Google, which donated it to the Cloud Native Computing Foundation (CNCF) with its v1.0 release in July 2015.

_images/kubarchitect.png

A Kubernetes cluster consists of two main elements:

• Master Node
• Worker Node

The Master Node

A master node has the following components:

• API server
• Scheduler
• Controller manager
• etcd

The controller manager on the master node runs different non-terminating control loops that regulate the state of the Kubernetes cluster. Each control loop knows the desired state of the objects it manages and watches their current state through the API server. If the current state does not match the desired state, the control loop takes corrective steps to bring the current state in line with the desired state.

etcd is a distributed key-value store based on the Raft consensus algorithm. Kubernetes uses it to store the cluster state as well as configuration details such as subnets, ConfigMaps, and Secrets.
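As an illustration, the keys Kubernetes keeps in etcd can be listed with the v3 `etcdctl` client from the master node. This is a sketch only: the certificate paths below are assumptions based on a typical kubeadm layout and will differ in other installations.

```shell
# On the master node, with access to etcd's client certificates
# (the certificate paths are assumptions; adjust for your installation):
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        get /registry/ --prefix --keys-only | head
```

Kubernetes objects live under the `/registry/` prefix, so the command above shows the first few stored keys without printing their values.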

The Worker Nodes

A worker node has the following components:

• Container runtime
• kubelet
• kube-proxy

_images/workernode.png

kubelet is an agent which runs on each worker node and communicates with the master node. It receives the Pod definition via various means (primarily, through the API server), and runs the containers associated with the Pod. It also makes sure that the containers which are part of the Pods are healthy at all times.

Accessing your MigrateR cluster

The cluster is a single-master, multi-worker installation with a single-node etcd instance.

In this cluster setup, we have a single master node, which also runs a single-node etcd instance. Multiple worker nodes are connected to the master node.

The API structure, known as the HTTP API space, resides on this master node and can be viewed as follows:

_images/api.png
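The HTTP API space can be explored directly by proxying the API server to a local port with kubectl and issuing plain HTTP requests; the two endpoints below are the standard roots for the core group and the named API groups.

```shell
# Start a local, authenticated proxy to the API server:
kubectl proxy --port=8001 &

# The core API group is rooted at /api, named groups at /apis:
curl http://127.0.0.1:8001/api/v1/namespaces
curl http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments
```

This is a convenient way to see the raw JSON objects that commands such as `kubectl get` format for you.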

kubectl

Use the following commands to inspect your Kubernetes cluster with kubectl, the Kubernetes command-line tool:

• kubectl cluster-info
• kubectl config view

Other useful commands to inspect the various configurations within the cluster include:

• kubectl get namespaces
• kubectl get deployments
• kubectl get replicasets
• kubectl get pods
• kubectl get svc
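A typical inspection session combines these commands with a namespace flag and a describe step, sketched below; `my-deployment` is a placeholder name, not an object that exists in your cluster.

```shell
# Cluster endpoints and core services:
kubectl cluster-info

# Inspect workloads in a specific namespace, with node placement shown:
kubectl get pods --namespace kube-system -o wide

# Describe a single object for its events and detailed status
# ("my-deployment" is a placeholder name):
kubectl describe deployment my-deployment
```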

HELM - package manager

Kubernetes Helm is the merged result of Helm Classic and the Kubernetes port of GCS Deployment Manager. The project was jointly started by Google and Deis, though it is now part of the CNCF. Many companies now contribute regularly to Helm.

Helpful commands

To begin working with Helm, run the ‘helm init’ command:

$ helm init

This will install Tiller to your running Kubernetes cluster. It will also set up any necessary local configuration.

Common actions from this point include:

• helm search: search for charts
• helm fetch: download a chart to your local directory to view
• helm install: install a chart into Kubernetes, creating a new release
• helm list: list releases of charts
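The actions above chain together into a typical Helm 2 workflow, sketched here with the `stable/mysql` chart as an example; the release name `my-database` is a placeholder of our choosing.

```shell
# Search the configured repositories for a chart:
helm search mysql

# Download and unpack the chart locally to inspect its templates:
helm fetch stable/mysql --untar

# Install the chart as a named release, then list releases:
helm install stable/mysql --name my-database
helm list
```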

Role-based Access Control

In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified. Read more about service account permissions in the official Kubernetes docs.

Bitnami also has a fantastic guide for configuring RBAC in your cluster that takes you through RBAC basics.

This guide is for users who want to restrict Tiller’s capabilities to installing resources in certain namespaces, or to grant a Helm client access to a running Tiller instance.
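The common starting point is to run Tiller under a dedicated service account. A minimal sketch, using the broad cluster-admin role for brevity; a production setup should bind a narrower Role scoped to specific namespaces instead.

```shell
# Create a dedicated service account for Tiller:
kubectl create serviceaccount tiller --namespace kube-system

# Bind it to a role; cluster-admin is shown only for simplicity,
# a narrower Role per namespace is preferable in production:
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# Tell helm init to run Tiller under that service account:
helm init --service-account tiller
```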

Service Mesh - Istio

The term service mesh is used to describe the network of microservices that make up a distributed application and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

Istio

Istio lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices. It makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality.

Istio security involves several components:

  • Citadel for key and certificate management
  • Sidecar and perimeter proxies to implement secure communication between clients and servers
  • Pilot to distribute authentication policies and secure naming information to the proxies
  • Mixer to manage authorization and auditing

_images/istio.png
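The sidecar proxy mentioned above is added to workloads in one of two standard ways, sketched here; `app.yaml` is a placeholder for your own Deployment manifest.

```shell
# Enable automatic sidecar injection for a namespace; Istio's
# mutating webhook then adds the Envoy proxy to every new Pod:
kubectl label namespace default istio-injection=enabled

# Alternatively, inject the sidecar manually into a manifest
# ("app.yaml" is a placeholder for your own Deployment file):
istioctl kube-inject -f app.yaml | kubectl apply -f -
```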

Knative

Knative Serving builds on Kubernetes and Istio to support deploying and serving serverless applications and functions. Serving is easy to get started with and scales to support advanced scenarios.

Serving resources

Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These objects are used to define and control how your serverless workload behaves on the cluster:

Service: The service.serving.knative.dev resource automatically manages the whole lifecycle of your workload. It controls the creation of other objects to ensure that your app has a route, a configuration, and a new revision for each update of the service. The Service can be configured to always route traffic to the latest revision or to a pinned revision.

Route: The route.serving.knative.dev resource maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes.

Configuration: The configuration.serving.knative.dev resource maintains the desired state for your deployment. It provides a clean separation between code and configuration and follows the Twelve-Factor App methodology. Modifying a configuration creates a new revision.

Revision: The revision.serving.knative.dev resource is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as useful.

_images/Kserving.png
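The relationship between these resources can be seen by applying a minimal Knative Service: Knative creates the Route, Configuration, and first Revision from it automatically. A sketch, assuming the `serving.knative.dev/v1` API (older Knative installs use `v1alpha1` with a different spec layout) and using Knative's public Go sample image.

```shell
# A minimal Knative Service; substitute your own workload image.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
EOF

# Inspect the objects Knative created from the Service:
kubectl get routes,configurations,revisions
```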