Containerd, Kubernetes, and Security Sandbox Containers


Containerd is an open source, industry-standard container runtime that focuses on simplicity, robustness, and portability, and supports both Linux and Windows.

![1_jpeg](https://yqfile.alicdn.com/7bb34fd60db56373632b0c949a7090fc075ab456.jpeg)

* On December 14, 2016, Docker announced that Containerd, the core component of Docker Engine, would be donated to a new open source community for independent development and operation. Alibaba Cloud, AWS, Google, IBM, and Microsoft joined as initial members to jointly build the Containerd community;

* In March 2017, Docker donated Containerd to the CNCF (Cloud Native Computing Foundation). Since then, Containerd has developed rapidly and gained broad support;

* Docker Engine already uses Containerd as the basis of its container lifecycle management, and in May 2018 Kubernetes also officially announced support for Containerd as a container runtime;

* In February 2019, the CNCF announced that Containerd had graduated and become a production-ready project.

Containerd has had built-in Container Runtime Interface (CRI) support since version 1.1, which further simplifies its use with Kubernetes. Its architecture is as follows:

![2_jpeg](https://yqfile.alicdn.com/cebd07a2f58c10144a8a459377aadd66278a5bec.jpeg)
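As a concrete illustration (a minimal sketch, not taken from this article's environment), a kubelet of that era is pointed at Containerd's built-in CRI endpoint through its gRPC socket, and `crictl` can be aimed at the same socket to verify it; the socket path shown is containerd's default and may differ on your distribution:

```
# Kubelet flags as used around Kubernetes v1.16; newer releases move these
# settings into the kubelet configuration file.
$ kubelet --container-runtime=remote \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock ...

# Point crictl at the same endpoint to check that the runtime is reachable.
$ sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
```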

In Kubernetes scenarios, Containerd consumes fewer resources and starts faster than the full Docker Engine.

![3_jpeg](https://yqfile.alicdn.com/f5ddfe8f88a9af1a2ebba8e56e94b61923bd3d1b.jpeg)

![4_jpeg](https://yqfile.alicdn.com/704e219fb0977b28efa990e7d0119966608cd02b.jpeg)

Image source: [Containerd](https://yq.aliyun.com/go/ArticlerenderRedirect?url=https%3a%2f%2f2018%2f05%2f24%2fkubernetes-containerd-integration-goes-ga%2F)

CRI-O, led by Red Hat, is a container runtime management project that competes with Containerd. Compared with CRI-O, Containerd enjoys noticeably broader adoption and community support.

![5_jpeg](https://yqfile.alicdn.com/9cc0986cdfa64e596b355305d346da7ec72156fe.jpeg)

Image source: [InfoQ](https://yq.aliyun.com/go/ArticlerenderRedirect?url=https%3a%2f%2fwww.infoq.cn%2FArticle%2Fodslclsjvo8BNX)

More importantly, Containerd provides a flexible extension mechanism that supports any container runtime compliant with the OCI (Open Container Initiative), such as runC containers (also known as Docker containers) and security sandbox containers such as Kata Containers, gVisor, and Firecracker.
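As a sketch of what this extension mechanism looks like in practice (illustrative, not the exact configuration used later in this article), registering an additional OCI runtime such as gVisor's runsc with Containerd's CRI plugin is roughly a matter of adding a runtime handler in `/etc/containerd/config.toml`; the shim binary name and section layout below follow the containerd 1.2-era CRI plugin:

```
$ cat /etc/containerd/config.toml
# ... other containerd settings ...
[plugins.cri.containerd.runtimes.runsc]
  # Use gVisor's shim (containerd-shim-runsc-v1 must be on containerd's PATH)
  # instead of the default runc-based runtime.
  runtime_type = "io.containerd.runsc.v1"

$ sudo systemctl restart containerd
```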

![6_jpeg](https://yqfile.alicdn.com/205d974617757b8879d15a0587f8f14ae4ca8ab3.jpeg)

In a Kubernetes environment, you can use APIs and command line tools at different levels to manage containers, Pods, images, and other resources. To make this easier to follow, the figure below shows how the different layers of APIs and CLIs participate in container lifecycle management; a short side-by-side command sketch follows the list.

![7_jpeg](https://yqfile.alicdn.com/f4e80a56eb565ca847586e919b997bb0e7fc6476.jpeg)

* kubectl: a cluster-level command line tool that operates on Kubernetes's basic concepts

* [crictl](https://yq.aliyun.com/go/articleRenderRedirect?spm=a2c4e.11153940.0.0.60875022IQ6dSV&url=https%3A%2F%2Fgithub.com%2Fcontainerd%2Fcri%2Fblob%2Fmaster%2Fdocs%2Fcrictl.md): a command line tool for the CRI on a node

* [ctr](https://yq.aliyun.com/go/articleRenderRedirect?spm=a2c4e.11153940.0.0.60875022IQ6dSV&url=https%3A%2F%2Fgithub.com%2Fcontainerd%2Fcontainerd%2Fblob%2Fmaster%2Fdocs%2Fman%2Fctr.1.md): a command line tool for Containerd itself
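The sketch below (illustrative commands only) shows how the same node looks from each of these levels; `kubectl` talks to the Kubernetes API server, while `crictl` and `ctr` are run directly on the node:

```
# Cluster level: Kubernetes objects through the API server
$ kubectl get pods -o wide

# Node level: Pods and containers as the CRI (and kubelet) see them
$ sudo crictl pods
$ sudo crictl ps

# Containerd level: raw containers and images inside a containerd namespace
$ sudo ctr --namespace=k8s.io containers ls
$ sudo ctr --namespace=k8s.io images ls
```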

Experience
----------

Minikube is the easiest way to try Containerd as the Kubernetes container runtime. Below we use it to run Containerd as the container runtime and exercise two different runtime implementations: runC and gVisor.

Due to network access restrictions, many users cannot experiment with the official Minikube directly. The latest Minikube release, 1.5, provides complete configuration options that let you pull the required Docker images and configuration from Alibaba Cloud's mirror addresses, while supporting different container runtimes such as Docker and Containerd. We [create](https://yq.aliyun.com/articles/221687?spm=a2c4e.11153940.0.0.60875022iq6dsv) a Minikube virtual machine environment; note that you must pass the `--container-runtime=containerd` parameter to set Containerd as the container runtime. Also replace `--registry-mirror` with your own Alibaba Cloud image-accelerator address.

```
$ minikube start --image-mirror-country cn \
    --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.5.0.iso \
    --registry-mirror=https://xxx.mirror.aliyuncs.com \
    --container-runtime=containerd

minikube v1.5.0 on Darwin 10.14.6
Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
Unable to access the default image repository from your location. Using registry.cn-hangzhou.aliyuncs.com/google_containers as a fallback image repository.
Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
VM is unable to connect to the selected image repository: command failed: curl -sS https://k8s.gcr.io/
stdout:
stderr: curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out
: Process exited with status 7
Preparing Kubernetes v1.16.2 on containerd 1.2.8 ... pulling images ...
Launching Kubernetes ...
Waiting for: apiserver etcd scheduler controller
Done! kubectl is now configured to use "minikube"

$ minikube dashboard
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
Opening http://127.0.0.1:54438/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser ...
```


Deploy a test application

-------------------------

We deploy an nginx application as a Pod:

```
$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx

$ kubectl apply -f nginx.yaml
pod/nginx created

$ kubectl exec nginx -- uname -a
Linux nginx 4.19.76 #1 SMP Fri Oct 25 16:07:41 PDT 2019 x86_64 GNU/Linux
```


Next, we enable gVisor support in Minikube:

```
$ minikube addons enable gvisor
gvisor was successfully enabled

$ kubectl get pod,runtimeclass gvisor -n kube-system
NAME         READY   STATUS    RESTARTS   AGE
pod/gvisor   1/1     Running   0          60m

NAME                              CREATED AT
runtimeclass.node.k8s.io/gvisor   2019-10-27T01:40:45Z

$ kubectl get runtimeClass
NAME     CREATED AT
gvisor   2019-10-27T01:40:45Z
```


Once the `gvisor` Pod enters the `Running` state, you can deploy test applications that use gVisor.
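For reference, the RuntimeClass object that the addon registers looks roughly like the sketch below (field values are illustrative; `handler` must match a runtime handler configured in Containerd, which is `runsc` in gVisor's case):

```
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
# Maps to the runsc runtime handler in containerd's CRI plugin configuration
handler: runsc
```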

We can see that a RuntimeClass named `gvisor` has been registered in the Kubernetes cluster. From now on, developers can select the desired container runtime simply by specifying `runtimeClassName` in a Pod declaration. For example, let's create an nginx application that runs in a gVisor sandbox container.

```
$ cat nginx-untrusted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx

$ kubectl apply -f nginx-untrusted.yaml
pod/nginx-untrusted created

$ kubectl exec nginx-untrusted -- uname -a
Linux nginx-untrusted 4.4 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
```


We can clearly see that, because a runC container shares the operating system kernel with its host, the kernel version reported inside the runC container is identical to that of the Minikube host; gVisor's runsc container, in contrast, uses an independent kernel, so its kernel version differs from the Minikube host's.

Precisely because each sandbox container has its own independent kernel, the security attack surface is reduced and isolation is much stronger, which makes sandbox containers well suited to running untrusted applications or multi-tenant scenarios. Note: in Minikube, gVisor intercepts system calls via ptrace, which incurs a significant performance penalty, and gVisor's compatibility still needs to improve.
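Another quick check (a hedged example that assumes the image ships `dmesg`) is to read the kernel log from inside the sandboxed Pod; gVisor answers with its own user-space kernel's boot messages rather than the host kernel's ring buffer:

```
# Inside the gVisor Pod this prints the sandbox kernel's boot log,
# not the Minikube host's.
$ kubectl exec nginx-untrusted -- dmesg
```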

Use the ctr and crictl tools

----------------------------

Now let's enter the Minikube virtual machine:

```
$ minikube ssh
```


Containerd supports isolating container resources through namespaces. Check the existing Containerd namespaces:

```
$ sudo ctr namespaces ls
NAME    LABELS
k8s.io
```


```
# List all container images
$ sudo ctr --namespace=k8s.io images ls

# List all containers
$ sudo ctr --namespace=k8s.io containers ls
```

In a Kubernetes environment, it is simpler to operate on Pods with `crictl`:

```
# List Pods
$ sudo crictl pods
POD ID              CREATED             STATE   NAME              NAMESPACE   ATTEMPT
78bd560a70327       3 hours ago         Ready   nginx-untrusted   default     0
94817393744fd       3 hours ago         Ready   nginx             default     0

# Show Pods whose name contains "nginx", with details
$ sudo crictl pods --name nginx -v
ID: 78bd560a70327f14077c441aa40da7e7ad52835100795a0fa9e5668f41760288
Name: nginx-untrusted
UID: dda218b1-d72e-4028-909d-55674fd99ea0
Namespace: default
Status: Ready
Created: 2019-10-27 02:40:02.660884453 +0000 UTC
Labels:
    io.kubernetes.pod.name -> nginx-untrusted
    io.kubernetes.pod.namespace -> default
    io.kubernetes.pod.uid -> dda218b1-d72e-4028-909d-55674fd99ea0
Annotations:
    kubectl.kubernetes.io/last-applied-configuration -> {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx-untrusted","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"nginx"}],"runtimeClassName":"gvisor"}}
    kubernetes.io/config.seen -> 2019-10-27T02:40:00.675588392Z
    kubernetes.io/config.source -> api

ID: 94817393744fd18b72212a00132a61c6cc08e031afe7b5295edafd3518032f9f
Name: nginx
UID: bfcf51de-c921-4a9a-a60a-09faab1906c4
Namespace: default
Status: Ready
Created: 2019-10-27 02:38:19.724289298 +0000 UTC
Labels:
    io.kubernetes.pod.name -> nginx
    io.kubernetes.pod.namespace -> default
    io.kubernetes.pod.uid -> bfcf51de-c921-4a9a-a60a-09faab1906c4
Annotations:
    kubectl.kubernetes.io/last-applied-configuration -> {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"nginx"}]}}
    kubernetes.io/config.seen -> 2019-10-27T02:38:18.206096389Z
    kubernetes.io/config.source -> api
```


The relationship between Containerd and Docker

----------------------------------------------

Many readers care about the relationship between Containerd and Docker, and whether Containerd can replace Docker.

Containerd has become the mainstream container runtime implementation and enjoys strong support from both the Docker community and the Kubernetes community. Docker Engine's container lifecycle management is itself built on top of Containerd.
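You can observe this layering on any host running Docker Engine (a minimal sketch; exact process names vary by version): dockerd delegates container execution to a separate containerd daemon, which starts a shim process per container.

```
# dockerd, containerd, and the per-container shims show up as separate processes
$ ps -eo pid,ppid,comm | grep -E 'dockerd|containerd'

# docker info also reports the containerd and runc versions in use
$ docker info | grep -iE 'containerd|runc'
```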

![8](https://yqfile.alicdn.com/ffdf214128393a709be44b6262ff1c8c683505a8.png)

However, Docker Engine includes a richer developer toolchain, such as image building, and also provides Docker's own logging, storage, networking, and Swarm capabilities. In addition, most vendors in the container ecosystem (security, monitoring, development tools, and so on) support Docker Engine fairly completely, while support for Containerd is still being filled in gradually.

So, in a Kubernetes runtime environment, users who care more about security, efficiency, and customization can choose Containerd as the container runtime; for most developers, continuing to use Docker Engine as the container runtime is also a fine choice.

Alibaba Cloud Container Service support for Containerd

-------------------------------------------------------

In Alibaba Cloud Container Service for Kubernetes (ACK), we already use Containerd for container runtime management, supporting hybrid deployment of security sandbox containers and runC containers. In the current product, together with the Alibaba Cloud operating system team and Ant Financial, we support lightweight-virtualization runV sandbox containers, and in Q4 we will also release trusted, encrypted sandbox containers based on Intel SGX.

![9_jpeg](https://yqfile.alicdn.com/12914c04422a8330db2cd6e881427a65d2b9ef9b.jpeg)

Specific product information can be found at [https://help.aliyun.com/document_detail/140541.html](https://help.aliyun.com/document_detail/140541.html?spm=a2c4e.11153940.0.0.60875022IQ6dSV).

In Serverless Kubernetes (ASK), we also use Containerd's flexible plugin mechanism to customize and trim the container runtime for the nodeless environment.

[Original link: Containerd and security sandbox containers on Kubernetes](https://yq.aliyun.com/articles/727308?utm_content=g_1000089999)

This article is original content of the Yunqi Community and may not be reproduced without permission.
