Deploying Kubernetes on Alibaba Cloud ECS

Step 1: Purchase three cloud instances

OS: Ubuntu 20.04 64-bit
Specs: 2 vCPUs, 4 GiB RAM
Hostnames: master-k8s, node1-k8s, node2-k8s

Step 2: Log in to the machines and disable swap

Unless noted otherwise, run every command below on all three machines.

# Disable swap for the current session
root@master-k8s:~# swapoff -a
# Disable it permanently (comment out every swap entry in /etc/fstab)
root@master-k8s:~# sed -ri 's/.*swap.*/#&/' /etc/fstab
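The sed expression above replaces any line containing "swap" with itself prefixed by `#` (the `&` expands to the whole match), commenting the entry out. A runnable sketch against a throwaway copy of an fstab (file path and contents are illustrative):

```shell
# Build a two-line demo fstab: one root entry, one swap entry.
printf '%s\n' 'UUID=abc / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > /tmp/fstab.demo

# Same in-place edit as in the real step, applied to the demo file.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
# prints:
#   UUID=abc / ext4 defaults 0 1
#   #/swapfile none swap sw 0 0
```

Only the swap line is commented out; other entries are untouched.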

Step 3: Pass bridged IPv4 traffic to iptables

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
-- Result --
root@master-k8s:~# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the settings:

sysctl --system
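On some images these keys do not exist until the br_netfilter kernel module is loaded (run `modprobe br_netfilter` first if `sysctl --system` reports them as missing; whether your image auto-loads it is an assumption to verify). A quick sanity check of the file itself, run here against a copy in /tmp so it works anywhere:

```shell
# Recreate the config fragment in /tmp for a portable check.
cat > /tmp/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Both bridge keys should be present and set to 1.
grep -c '^net.bridge.*= 1$' /tmp/k8s.conf
# prints: 2
```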

Step 4: Install Docker

Docker is currently the most popular container runtime.

apt-get remove docker docker-engine docker.io containerd runc  # remove any older Docker packages; skip this on a fresh machine
apt-get update
apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io    # install the Docker packages
docker run hello-world  # verify that Docker is working

The following output indicates a successful installation:

...

Hello from Docker!
...
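For reference, the `echo | tee` pipeline above assembles the apt source line from two command substitutions. The values shown below are what you would expect on a 64-bit Ubuntu 20.04 host (an assumption; on your machine, run the real commands to confirm):

```shell
arch=amd64        # what `dpkg --print-architecture` prints on x86-64
codename=focal    # what `lsb_release -cs` prints on Ubuntu 20.04

# The resulting line written to /etc/apt/sources.list.d/docker.list:
echo "deb [arch=$arch signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $codename stable"
```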

Step 5: Configure a Docker registry mirror

Besides adding a registry mirror, the configuration below sets Docker's cgroup driver to systemd, matching the kubelet's default in recent releases; a mismatch between the two drivers will keep the kubelet from starting.

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker

Step 6: Configure the Kubernetes package repository

apt-get update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

Step 7: Install the Kubernetes components

apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00

Step 8: Initialize the Kubernetes control plane

Initialize the cluster with kubeadm (master only):

kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all
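A note on the two CIDR flags above: the service CIDR and the pod network CIDR must not overlap. Here 10.96.0.0/12 spans 10.96.0.0 through 10.111.255.255, and 10.244.0.0/16 lies outside that range, so the defaults used in this guide are safe. A simplified, illustrative check (for these two particular CIDRs, comparing the second octet is sufficient):

```shell
# /12 fixes the first octet plus the top 4 bits of the second octet,
# so 10.96.0.0/12 covers second octets 96 through 96+15 = 111.
svc_lo=96
svc_hi=$((svc_lo + 15))
pod=244   # second octet of 10.244.0.0/16

if [ "$pod" -lt "$svc_lo" ] || [ "$pod" -gt "$svc_hi" ]; then
  echo "no overlap"
else
  echo "overlap"
fi
# prints: no overlap
```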

Part of the output looks like this:

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.70.63:6443 --token lddban.k77fwxj5fmyt3wqj \
    --discovery-token-ca-cert-hash sha256:71a7c5198a7ddfa441da244d16efe5508d640a9cbea0fbeab1bc1fca7cd153b2

Set up the kubeconfig (master only):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
source ~/.bashrc

Step 9: Join the worker nodes to the cluster

Run the following on node1-k8s and node2-k8s only:

kubeadm join 172.30.70.63:6443 --token lddban.k77fwxj5fmyt3wqj \
    --discovery-token-ca-cert-hash sha256:71a7c5198a7ddfa441da244d16efe5508d640a9cbea0fbeab1bc1fca7cd153b2

Note: if you copied the join command from the init output, remove any stray whitespace introduced by the line break.

To use kubectl on the worker nodes as well, point it at the kubelet's kubeconfig (these credentials only grant limited permissions):

echo 'export KUBECONFIG=/etc/kubernetes/kubelet.conf' >> $HOME/.bashrc
chown $(id -u):$(id -g) /etc/kubernetes/kubelet.conf
source ~/.bashrc
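The join token from the init output is only valid for 24 hours by default; if it has expired, run `kubeadm token create --print-join-command` on the master to get a fresh join command. The `--discovery-token-ca-cert-hash` value is simply a SHA-256 digest of the cluster CA's public key. A sketch of how it is derived; a throwaway certificate is generated here so the pipeline can be run anywhere, whereas on a real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway self-signed certificate standing in for the
# cluster CA (name and paths are illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, convert to DER, and hash it with SHA-256.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')

echo "sha256:$hash"
```

The printed value has the same `sha256:<64 hex chars>` shape as the hash in the join command above.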

Now go back to master-k8s and run the following check:

root@master-k8s:~# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
master-k8s   NotReady   control-plane,master   8m51s   v1.23.0
node1-k8s    NotReady   <none>                 87s     v1.23.0
node2-k8s    NotReady   <none>                 82s     v1.23.0

Step 10: Configure the network and check node status

The nodes have joined the cluster, but all of them are in the NotReady state because the cluster network has not been configured yet. Calico is currently a mainstream Kubernetes networking add-on; install it as follows:

# Run on the master only
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

Immediately run kubectl get pods -n kube-system:

root@master-k8s:~# kubectl get pods -n kube-system
NAME                                      READY   STATUS     RESTARTS   AGE
calico-kube-controllers-6b77fff45-fp2cj   0/1     Pending    0          8s
calico-node-9tf5w                         0/1     Init:0/2   0          8s
calico-node-dx5bq                         0/1     Init:0/2   0          8s
calico-node-x78f8                         0/1     Init:0/2   0          8s
coredns-6d8c4cb4d-6wpt2                   0/1     Pending    0          11m
coredns-6d8c4cb4d-dvqvj                   0/1     Pending    0          11m
etcd-master-k8s                           1/1     Running    0          11m
kube-apiserver-master-k8s                 1/1     Running    0          11m
kube-controller-manager-master-k8s        1/1     Running    0          11m
kube-proxy-87tbj                          1/1     Running    0          4m17s
kube-proxy-9w9lv                          1/1     Running    0          4m22s
kube-proxy-s2j4f                          1/1     Running    0          11m
kube-scheduler-master-k8s                 1/1     Running    0          11m

The calico-node-* pods are still in the Init:0/2 state; wait a little while (20 seconds or so) and run the command again:

root@master-k8s:~# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b77fff45-fp2cj   1/1     Running   0          2m59s
calico-node-9tf5w                         1/1     Running   0          2m59s
calico-node-dx5bq                         1/1     Running   0          2m59s
calico-node-x78f8                         1/1     Running   0          2m59s
coredns-6d8c4cb4d-6wpt2                   1/1     Running   0          14m
coredns-6d8c4cb4d-dvqvj                   1/1     Running   0          14m
etcd-master-k8s                           1/1     Running   0          14m
kube-apiserver-master-k8s                 1/1     Running   0          14m
kube-controller-manager-master-k8s        1/1     Running   0          14m
kube-proxy-87tbj                          1/1     Running   0          7m8s
kube-proxy-9w9lv                          1/1     Running   0          7m13s
kube-proxy-s2j4f                          1/1     Running   0          14m
kube-scheduler-master-k8s                 1/1     Running   0          14m

Everything is Running now. kubectl get nodes shows that all nodes are Ready:

root@master-k8s:~# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
master-k8s   Ready    control-plane,master   15m     v1.23.0
node1-k8s    Ready    <none>                 7m44s   v1.23.0
node2-k8s    Ready    <none>                 7m39s   v1.23.0

root@master-k8s:~# kubectl cluster-info
Kubernetes control plane is running at https://172.30.70.63:6443
CoreDNS is running at https://172.30.70.63:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

With that, a simple Kubernetes cluster is up and running.
