k8s Cluster Deployment Manual
Installing kubeadm
Before you begin
- Firewalld and SELinux disabled.
- Swap disabled.
- Hosts entries added on every node.
- Unique hostname, MAC address, and product_uuid for every node (a command sketch for all of these follows the list).
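A minimal sketch of these preparation steps on CentOS 7; the hostnames and addresses below are examples only, replace them with your own:
$ sudo systemctl disable --now firewalld
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ sudo swapoff -a
$ sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
$ cat <<EOF | sudo tee -a /etc/hosts
192.168.13.200 master
192.168.13.201 node1
192.168.13.202 node2
EOF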
Letting iptables see bridged traffic
Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly, call sudo modprobe br_netfilter.
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system
Load the overlay and br_netfilter modules
$ cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
$ cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sysctl --system
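The modules-load.d files only take effect at the next boot, so load the modules now, then confirm that they are present and that the three parameters report 1:
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward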
Installation
Create the Docker and Kubernetes repos.
$ cat <<EOF | tee /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/\$releasever/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
$ cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
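A quick sanity check that yum can see both repos:
$ yum repolist enabled | grep -Ei 'docker-ce-stable|kubernetes'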
Install docker-ce.
$ yum -y install docker-ce
Change the Docker daemon configuration: the cgroup driver is set to systemd so that Docker and the kubelet agree on it.
$ mkdir -p /etc/docker && \
cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
$ systemctl enable docker
$ systemctl daemon-reload
$ systemctl restart docker
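Docker should now report the systemd cgroup driver:
$ docker info | grep -i 'cgroup driver'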
Install kubeadm, kubelet, and kubectl
$ yum -y install kubeadm kubelet kubectl
$ systemctl enable kubelet.service
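The repo carries many versions. If you need the packages to match the 1.23.1 control plane initialized below, pin the versions instead (assuming the mirror carries them):
$ yum -y install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1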
Creating and joining a cluster with kubeadm
Set up the Kubernetes control-plane node
Export the default configuration
kubeadm config print init-defaults > init.config.yaml
Pull the kubeadm images.
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
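To see which images will be pulled before downloading them:
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers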
Initialize the Kubernetes control-plane node. The 10.244.0.0/16 pod CIDR matches flannel's default.
kubeadm init \
--apiserver-advertise-address 192.168.13.200 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version 1.23.1 \
--pod-network-cidr 10.244.0.0/16
On success, the control-plane initialization prints output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.13.200:6443 --token rc7jvc.l2msicvo5j3rp0ei \
--discovery-token-ca-cert-hash sha256:ec1c2765528309495366e89b8eb7755831a237caa10503a83eb4dd2b812efd33
Install a pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
⚠️ Cluster DNS (CoreDNS) will not start up before a network is installed
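Once the flannel pods are running, CoreDNS should start and the node should turn Ready; check with:
kubectl get nodes
kubectl get pods -n kube-system -l k8s-app=kube-dns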
Join nodes
Export the default configuration
kubeadm config print join-defaults > join.config.yaml
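If you fill in join.config.yaml (API server endpoint, token, CA cert hash), a worker can join from the file instead of command-line flags:
kubeadm join --config join.config.yaml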
Tokens generated by kubeadm init are valid for 24 hours. List the existing tokens:
[root@master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
4xs74j.w3bz3nm8kofveh5f 23h 2021-05-17T21:14:50+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
Create a token on the control-plane node
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 10.10.77.240:6443 --token 1eatf0.xut4j9rkbw0rhu4a --discovery-token-ca-cert-hash sha256:e60d1cdd4e9841ba5630072ad874ff58799bce5fa7506af23c6ef25118c121f6
Recompute the value for --discovery-token-ca-cert-hash (the SHA-256 digest of the cluster CA public key)
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
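The hex digest printed above is what follows sha256: in the join command; with placeholder values (not from this cluster) it is used like so:
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<digest>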
Verify the cluster status (kubectl get cs uses the deprecated ComponentStatus API, but it still works on this release)
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-dlvn4 1/1 Running 0 6m24s
coredns-f9fd979d6-rpwjm 1/1 Running 0 6m24s
etcd-kube-master 1/1 Running 0 6m34s
kube-apiserver-kube-master 1/1 Running 0 6m34s
kube-controller-manager-kube-master 1/1 Running 0 3m40s
kube-flannel-ds-872tz 1/1 Running 0 30s
kube-flannel-ds-wxcrv 1/1 Running 0 2m32s
kube-flannel-ds-xg2rm 1/1 Running 0 32s
kube-proxy-kcjj6 1/1 Running 0 30s
kube-proxy-m4ndt 1/1 Running 0 32s
kube-proxy-pgmx4 1/1 Running 0 6m24s
kube-scheduler-kube-master 1/1 Running 0 3m56s
Required ports
- Control-plane node

Protocol | Direction | Port Range | Purpose | Used By
---|---|---|---|---
TCP | Inbound | 6443 | Kubernetes API server | All
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd
TCP | Inbound | 10250 | kubelet API | Self, Control plane
TCP | Inbound | 10251 | kube-scheduler | Self
TCP | Inbound | 10252 | kube-controller-manager | Self

- Worker node

Protocol | Direction | Port Range | Purpose | Used By
---|---|---|---|---
TCP | Inbound | 10250 | kubelet API | Self, Control plane
TCP | Inbound | 30000-32767 | NodePort Services | All
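This manual disables firewalld at the start; if you keep it running instead, the control-plane ports above could be opened along these lines (a sketch, run as root):
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --reload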
Clean up
If you used disposable servers for your cluster, for testing, you can switch those off and do no further clean up. You can use kubectl config delete-cluster to delete your local references to the cluster.
However, if you want to deprovision your cluster more cleanly, you should first drain the node and make sure that the node is empty, then deconfigure the node.
Remove the node
Talking to the control-plane node with the appropriate credentials, run:
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
Before removing the node, reset the state installed by kubeadm:
kubeadm reset
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If you want to reset the IPVS tables, you must run the following command:
ipvsadm -C
Now remove the node:
kubectl delete node <node name>
If you wish to start over, run kubeadm init or kubeadm join with the appropriate arguments.
Clean up the control plane
You can use kubeadm reset on the control-plane host to trigger a best-effort clean up. See the kubeadm reset reference documentation for more information about this subcommand and its options.