
Kubernetes Learning Notes from Scratch (3)

2021/1/28 22:48:10



I. Initializing the Cluster

Method 1:

// Assumes the images required by Kubernetes have already been pulled
# kubeadm init --kubernetes-version v1.20.2 --pod-network-cidr=10.244.0.0/16

Method 2:

// Since v1.13 a custom image repository address can be specified
# kubeadm init \
--apiserver-advertise-address=192.168.10.101 \
--kubernetes-version v1.20.2 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers

Initialization parameter notes
--apiserver-advertise-address
Specifies which interface on the Master is used to communicate with the other nodes in the cluster. If the Master has multiple interfaces, it is best to specify one explicitly; otherwise kubeadm automatically picks the interface that has the default gateway.
--kubernetes-version=v1.20.2
Disables version detection. The default value, stable-1, makes kubeadm fetch the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version skips that network request.
--pod-network-cidr
Specifies the Pod network range. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr. We set it to 10.244.0.0/16 because we will use the flannel network add-on, whose default configuration requires this CIDR.
--image-repository
The default Kubernetes registry is k8s.gcr.io, which is not reachable from mainland China. Since v1.13 the --image-repository flag (default k8s.gcr.io) lets us override it, e.g. with the Alibaba Cloud mirror registry.aliyuncs.com/google_containers.
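As an aside, the same flags can also be expressed as a kubeadm configuration file and passed via kubeadm init --config. A minimal sketch (field names per the kubeadm.k8s.io/v1beta2 API used by v1.20; the file name kubeadm-config.yaml is just an example):

```shell
# Write the equivalent of Method 2's flags as a kubeadm config file.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.101
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
EOF

# Then initialize with:
# kubeadm init --config kubeadm-config.yaml
```

A config file is easier to version-control and reuse than a long flag list, which matters once you rebuild the cluster more than once.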

The detailed output, for reference:

[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [debian1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [debian1 localhost] and IPs [192.168.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [debian1 localhost] and IPs [192.168.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.508699 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node debian1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node debian1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rrycf0.ompqbi1ptb2x08rn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.101:6443 --token rrycf0.ompqbi1ptb2x08rn \
    --discovery-token-ca-cert-hash sha256:99cef00dcfa838f44f70d5b071d2bd41e8169a5f795414b247ef4084eabe5548

2. Create the kubeconfig directory as instructed

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Apply the network add-on

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Tip: apply the network add-on before adding Nodes to the cluster, or network errors will occur. If the URL above is unreachable, the manifest can be downloaded from Baidu Netdisk: https://pan.baidu.com/s/1gzX00mRBY0YPHoz3KerbiQ (extraction code: yaml)

II. Adding Worker Nodes

// On each of the other nodes, run the join command printed by kubeadm init
# kubeadm join 192.168.10.101:6443 --token rrycf0.ompqbi1ptb2x08rn \
    --discovery-token-ca-cert-hash sha256:99cef00dcfa838f44f70d5b071d2bd41e8169a5f795414b247ef4084eabe5548

If you have lost the join token, run kubeadm token create --print-join-command to generate a new one.
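The --discovery-token-ca-cert-hash value can also be recomputed by hand from the cluster CA certificate, using the openssl pipeline from the kubeadm documentation. A small sketch wrapping it in a helper function (the function name is ours; the default CA path on the master is /etc/kubernetes/pki/ca.crt):

```shell
# Compute the sha256 discovery hash of a CA certificate's public key, as
# expected by kubeadm join's --discovery-token-ca-cert-hash sha256:<hash>.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master (default kubeadm CA location):
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

This is useful when you still have the token but not the full join command.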

The following output indicates success:

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Get node information

# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
debian1   Ready    control-plane,master   7m30s   v1.20.2
debian2   Ready    <none>                 79s     v1.20.2
debian3   Ready    <none>                 67s     v1.20.2

Assign node roles (mark debian2 and debian3 as worker Nodes; debian2 is shown below)

# kubectl label nodes debian2 node-role.kubernetes.io/node=node
node/debian2 labeled
# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
debian1   Ready    control-plane,master   10m     v1.20.2
debian2   Ready    node                   3m49s   v1.20.2
debian3   Ready    <none>                 3m37s   v1.20.2

Enable Docker and kubelet at boot

# systemctl enable kubelet docker

# Removing a node

1. First, run on the master
# kubectl drain [node-name] --delete-local-data --force --ignore-daemonsets
2. Then, on the node, clear its configuration
# kubeadm reset
# ifconfig flannel.1 down
# ip link delete flannel.1
3. Finally, delete the node on the master
# kubectl delete node [node-name]

Troubleshooting

# Inspecting problems

// View the kubelet logs, usually for network troubleshooting
# journalctl -f -u kubelet

# localhost:8080 was refused

// If you see the following error
# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

This happens because the post-init steps printed by kubeadm init were not followed, or because the command was not run on the Master. Fix it by running:

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Node's flannel interface has no CIDR IP

// If the flannel virtual interface has no address, similar to the output below:
# ip a
4: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default 
    link/ether 5e:88:cf:b0:81:ea brd ff:ff:ff:ff:ff:ff

This is likely because the cluster was initialized without a Pod CIDR; re-initialize the cluster with --pod-network-cidr=10.244.0.0/16 added.

Appendix

Related posts:

  • Kubernetes Learning Notes from Scratch (1)
  • Kubernetes Learning Notes from Scratch (2)

References:

  • K8s coredns CrashLoopBackOff - 知乎
  • Enable kubeadm completion - kubernetes
  • Options for Highly Available topology - kubernetes

Permalink: http://www.dtmao.cc/news_show_650085.shtml
