3. Install docker-ce-stable
The official documentation recommends installing 18.06.2; other Docker versions are not as well supported: "On each of your machines, install Docker. Version 18.06.2 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes."
sudo yum list docker-ce.x86_64 --showduplicates | sort -r
Pick docker-ce-18.06.1.ce-3.el7 from the list.
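With the version picked, the install itself is the usual yum command. A minimal sketch, assuming the version string chosen above (adjust it to whatever the list command actually shows on your machine):

sudo yum install -y docker-ce-18.06.1.ce-3.el7
sudo systemctl start docker
sudo systemctl enable docker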
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
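The warning is harmless for a test cluster, but the documented fix is to switch Docker to the systemd cgroup driver. A minimal sketch, assuming a fresh /etc/docker/daemon.json (if you already have one, merge this key instead of overwriting the file):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker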
Export and compress these images, then transfer them back to China. Multi-threaded HTTP transfer is the fastest; IDM with 64 threads can saturate the bandwidth 😂 and finishes the download in under a minute. Then scp the archive to the cloud server in China. The grep B is there to filter out the header line REPOSITORY TAG IMAGE ID CREATED SIZE from the output 😂. When running docker save, specify the image names, not the image IDs; otherwise every image you load will come back as <none> and won't start 😥. PS: the first time around I exported everything with docker save $(docker images -q), and when loading the archive every image NAME showed up as <none> 😂. docker images | grep B | cut -d ' ' -f1 extracts the images by their NAMEs instead.
docker save -o k8s.tar $(docker images | grep B | cut -d ' ' -f1) && gzip k8s.tar
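On the server back in China, the reverse steps would be roughly as follows (a sketch; k8s.tar.gz is the archive produced above):

gunzip k8s.tar.gz
docker load -i k8s.tar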
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ********
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
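The commands kubeadm prints at this point are the standard kubeconfig setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config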
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
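For example, flannel was a common choice at the time; note that the manifest has since moved from the coreos organization to flannel-io, so check the current flannel docs before applying:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml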
Then you can join any number of worker nodes by running the following on each as root:
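The actual join command, with the real token and CA hash, is printed at the end of kubeadm init. Its general shape is the following (placeholders shown here, not values from this cluster):

kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>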
╭─root@k8s-master ~
╰─# docker ps -a
CONTAINER ID    IMAGE                  COMMAND                  CREATED         STATUS         PORTS    NAMES
38d9c698ec37    efb3887b411d           "kube-controller-man…"   7 minutes ago   Up 7 minutes            k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_f423ac50e24b65e6d66fe37e6d721912_0
c273979e75b6    8931473d5bdb           "kube-scheduler --bi…"   7 minutes ago   Up 7 minutes            k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
71f1f40dfa9e    cfaa4ad74c37           "kube-apiserver --ad…"   7 minutes ago   Up 7 minutes            k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_d57282173a211f69b917251534760047_0
37636f04f5d6    2c4adeb21b4f           "etcd --advertise-cl…"   7 minutes ago   Up 7 minutes            k8s_etcd_etcd-k8s-master_kube-system_dcd3914b600c5e8e86b2026688cc6dc5_0
48fc68b067de    k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago   Up 7 minutes            k8s_POD_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
3c9f8e8224cf    k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago   Up 7 minutes            k8s_POD_kube-apiserver-k8s-master_kube-system_d57282173a211f69b917251534760047_0
b4903d8f18ee    k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago   Up 7 minutes            k8s_POD_kube-controller-manager-k8s-master_kube-system_f423ac50e24b65e6d66fe37e6d721912_0
f6d2cd0b03cd    k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago   Up 7 minutes            k8s_POD_etcd-k8s-master_kube-system_dcd3914b600c5e8e86b2026688cc6dc5_0
74a3699833bc    20a2d7035165           "/usr/local/bin/kube…"   9 minutes ago   Up 4 seconds            k8s_kube-proxy_kube-proxy-g4nd4_kube-system_afc4ba92-7657-11e9-b684-2aabd22d242a_1
ba61bed68ecc    k8s.gcr.io/pause:3.1   "/pause"                 9 minutes ago   Up 9 minutes            k8s_POD_kube-proxy-g4nd4_kube-system_afc4ba92-7657-11e9-b684-2aabd22d242a_4
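To check the same thing from the Kubernetes side rather than through Docker, something like the following works once the kubeconfig is in place (CoreDNS will stay Pending until a pod network is installed):

kubectl get pods -n kube-system -o wide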