[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
|            E..o+|
|              . o|
|               . |
|             . . |
|        S o .    |
|        .o X oo .|
|       oB +.o+oo.|
|       .o*o+++o+o|
|     .++o+Bo+=B*B|
+----[SHA256]-----+
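If you prefer to script this step, ssh-keygen can be run without prompts; a sketch, assuming an empty passphrase is acceptable in this environment:

# Generate a 2048-bit RSA key pair non-interactively (no passphrase).
ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa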
Distribute node-01's public key to the other servers to enable passwordless SSH login:
for n in `seq -w 01 06`; do ssh-copy-id node-$n; done
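To confirm passwordless login actually works on all six nodes (this assumes the node-01 through node-06 hostnames resolve via DNS or /etc/hosts):

# BatchMode makes ssh fail instead of prompting, so any node still
# asking for a password is reported immediately.
for n in `seq -w 01 06`; do ssh -o BatchMode=yes node-$n hostname; done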
server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5
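These server lines sit inside a haproxy section that load-balances the three apiservers behind the VIP. A minimal sketch of the surrounding stanza, assuming the section name and the bind address (10.31.90.200:6443, consistent with the certificate SANs in the init log below) rather than quoting the full original config:

listen kube-apiserver
    bind 10.31.90.200:6443        # VIP that controlPlaneEndpoint points at (assumed)
    mode tcp                      # TLS passthrough; haproxy does not terminate TLS
    balance roundrobin
    server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
    server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
    server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5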
[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.12.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
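The kubeadm-init.yaml passed to kubeadm init is not reproduced in full here; a minimal sketch consistent with this log (the version, endpoint, and token appear in the output above; the service subnet is inferred from the 10.12.0.1 cert SAN and the rest is assumed) would be:

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"          # matches the [bootstrap-token] line above
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.2
controlPlaneEndpoint: "10.31.90.200:6443"   # haproxy VIP; its explicit port causes the [endpoint] warnings
networking:
  serviceSubnet: "10.12.0.0/16"             # assumed; 10.12.0.1 is the first IP of this range

kubeadm then prints the usual success summary: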
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
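The commands printed here are kubeadm's standard kubeconfig setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config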
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
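The token and CA certificate hash in the printed join command are cluster-specific and are omitted here; the command has this shape (the token is from the [bootstrap-token] line above, the hash is a placeholder to be copied from your own init output):

kubeadm join 10.31.90.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash-from-your-init-output>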
[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@node-01 ~]# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   14m   v1.13.2
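node-01 shows NotReady because no pod network add-on has been deployed yet; the node only turns Ready once a CNI plugin is running. Flannel is one of the options from the add-ons page linked above (verify the manifest URL against your Kubernetes version):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml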