
A Practical Guide to kind

 ·  ☕ 2 min

1. Introduction

kind is a tool for running Kubernetes clusters in containers. Project page: https://github.com/kubernetes-sigs/kind

It is mainly used for:

  • Local development environments
  • Temporary environments for learning
  • Automated testing

2. Installing kind

  • macOS

```shell
brew install kind
```

  • Linux

```shell
curl -Lo /usr/local/bin/kind https://kind.sigs.k8s.io/dl/v0.21.0/kind-linux-amd64
chmod +x /usr/local/bin/kind
```

3. Creating a kind Cluster

If you have a proxy configured locally, it is recommended to reset the proxy environment variables before creating the cluster:

```shell
export https_proxy=http://x.x.x.x:7890
export http_proxy=http://x.x.x.x:7890
```

A local proxy is usually configured as http://127.0.0.1:7890, but kind cannot reach that address; change it to the host's IP, e.g. http://x.x.x.x:7890. Otherwise, image pulls will fail when you deploy applications.
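The steps above can be sketched as follows, assuming the proxy listens on port 7890 (adjust the port and interface to your setup):

```shell
# Derive the host's LAN IP (Linux; on macOS use: ipconfig getifaddr en0)
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')

# Point the proxy variables at the host IP instead of 127.0.0.1
export https_proxy="http://${HOST_IP}:7890"
export http_proxy="http://${HOST_IP}:7890"

# Exclude local and cluster-internal ranges so in-cluster traffic bypasses the proxy
export no_proxy="localhost,127.0.0.1,10.96.0.0/16,10.244.0.0/16"

echo "$https_proxy"
```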

3.1 Single Node

  • Create a single-node cluster

```shell
kind create cluster --image=kindest/node:v1.23.6 --name=dev

Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev

Have a nice day! 👋
```
  • Check the cluster

```shell
kubectl get node

NAME                STATUS   ROLES                  AGE   VERSION
dev-control-plane   Ready    control-plane,master   71s   v1.23.6
```

3.2 Multiple Nodes

  • Create a configuration file dev-multi-node.yaml

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.23.6
  - role: worker
    image: kindest/node:v1.23.6
  - role: worker
    image: kindest/node:v1.23.6
```
  • Create the multi-node cluster

```shell
kind create cluster --config dev-multi-node.yaml --name=dev-multi-node

Creating cluster "dev-multi-node" ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev-multi-node"
You can now use your cluster with:

kubectl cluster-info --context kind-dev-multi-node

Thanks for using kind! 😊
```
  • Check the cluster

```shell
kubectl get node

NAME                           STATUS   ROLES                  AGE     VERSION
dev-multi-node-control-plane   Ready    control-plane,master   2m45s   v1.23.6
dev-multi-node-worker          Ready    <none>                 2m23s   v1.23.6
dev-multi-node-worker2         Ready    <none>                 2m23s   v1.23.6
```

4. Managing the Cluster Lifecycle

  • List clusters

```shell
kind get clusters

dev
dev-multi-node
```

  • Switch clusters

Note that the context name follows the kind-{cluster name} format.

```shell
kubectl cluster-info --context kind-dev
```

  • Delete a cluster

```shell
kind delete cluster --name dev-multi-node
```

5. Mapping Ports to the Host

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 8000
        listenAddress: "0.0.0.0"
        protocol: tcp
```

When creating the kind cluster, add the extraPortMappings field to map container ports to host ports. Here, port 30000 of the kind cluster is mapped to port 8000 on the host.
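With that mapping in place, a NodePort Service pinned to node port 30000 becomes reachable on the host at localhost:8000. A sketch, with a hypothetical app named demo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # hypothetical application
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30000   # must match containerPort in the kind config
```

After applying this Service, curl http://localhost:8000 on the host reaches the application.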

6. Loading Images into the Cluster

```shell
kind load docker-image test:v1 --name dev
```

This loads a locally built image into the kind cluster.
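One caveat worth noting: the kubelet must not try to pull the loaded image from a remote registry, so the Pod spec should avoid imagePullPolicy: Always (the default for :latest tags). A hypothetical container fragment:

```yaml
# Container fragment for a Pod/Deployment using the image loaded above
containers:
  - name: test
    image: test:v1
    imagePullPolicy: IfNotPresent   # use the locally loaded image
```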

7. Configuring Networking

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv4
  apiServerPort: -1
  apiServerAddress: 127.0.0.1
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/16"
  disableDefaultCNI: false
  kubeProxyMode: "iptables"
  dnsSearch:
    - cluster.local
```

8. Configuring the Runtime

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry-1.docker.io"]
      endpoint = ["https://docker.nju.edu.cn"]
```

This configures a China-based registry mirror for containerd.

9. Enabling Feature Gates

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  "AdmissionWebhookMatchConditions": true
```

See https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/ for the available feature gates.

10. Mounting Host Directories into the Cluster

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /Users/shaowenchen/kind-host
        containerPath: /host
```

This mounts the host directory /Users/shaowenchen/kind-host to /host on the specified node of the kind cluster.
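To consume the mounted directory from a Pod, reference /host (the node-side containerPath) via a hostPath volume. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-dir-demo          # hypothetical Pod
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-dir
          mountPath: /data     # host files appear here inside the container
  volumes:
    - name: host-dir
      hostPath:
        path: /host            # the containerPath from the kind config
```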
