The k8s Cluster Setup I Owed You, Delivered on New Year's Eve!


One day I got a QQ friend request. The verification message read: "Add me, big brother, I'm your little sister."

I assumed it was a come-on from a bot, didn't accept, and replied: "I prefer young married women."

A while later my aunt called: "Your cousin tried to add you on QQ to ask you a few math problems. Why didn't you accept? And you told her you like young married women?"

Me: "Auntie, hear me out while I try to talk my way out of this..."

Happy New Year's Eve, everyone!

Prepare three CentOS 7 nodes: master: 192.168.0.100, node1: 192.168.0.101, node2: 192.168.0.102.

I won't walk through creating the VMs in VirtualBox; see these two earlier posts:

Installing CentOS in VirtualBox and setting up Tomcat

Configuring a static IP for CentOS 7 in VirtualBox: a summary of repeated pitfalls

Once the VMs are up, the IPs are assigned as planned above (master 192.168.0.100, node1 192.168.0.101, node2 192.168.0.102).

Installing Docker

Every node needs a Docker environment.

Configure the yum repository:

  

yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

 


Install and start Docker:

  

# Versions chosen to match the k8s version installed below
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
# Start Docker
systemctl start docker
# Start Docker on boot
systemctl enable docker

 


Check the Docker version.
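A quick way to check, assuming the install above succeeded:

```shell
# Short client version string
docker -v

# Detailed client and server versions (requires the daemon to be running)
docker version
```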

Configure a registry mirror

Because of the Great Firewall, pulling from registries abroad is slow and downloads often fail, so configure a domestic registry mirror:

  

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

 


Replace xxxx with your own accelerator address.

Note that this Docker setup is required on every node, not just one of them.
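To confirm the daemon actually picked up daemon.json, `docker info` reports both the mirror and the cgroup driver:

```shell
# Both values come from /etc/docker/daemon.json after the restart
docker info | grep -A 1 "Registry Mirrors"
docker info | grep "Cgroup Driver"
```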

Setting up the K8S cluster

Base environment

All nodes need the following settings:

  

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system

 


I already set the hostnames when creating the VMs; if you haven't, you can set each node's hostname with the command below.
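The screenshot with the exact command is missing from this copy; on CentOS 7 the standard tool is hostnamectl. The names below are assumptions chosen to match the node names used later in this post (k8snode1, k8snode2):

```shell
# Run once per node, each with its own name
hostnamectl set-hostname k8smaster    # on the master
hostnamectl set-hostname k8snode1     # on node1
hostnamectl set-hostname k8snode2     # on node2

# Verify
hostnamectl status
```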

Installing kubelet, kubeadm and kubectl

Install them on every node:

  

# Run on every node
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Versions matching the Docker install above
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet

 


Bootstrapping the cluster with kubeadm

Download the images on every node:

  

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh

 


Initializing the master node

Add the master domain mapping on every node:

  

# Add the master domain mapping on all machines; change the IP to your own
echo "192.168.1.119  cluster-endpoint" >> /etc/hosts

 


Run the initialization command below on the master node only:

  

# Initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.1.119 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
# None of these network ranges may overlap with each other or with the host
# network; if your hosts really are on 192.168.0.x as planned earlier, pick a
# different --pod-network-cidr

 


When output like the following appears, initialization succeeded.

The output calls out a few things:

1. As a regular user, run the printed commands on the master node.

If you are root, run the root variant on the master instead.

We run the regular-user commands on the master.
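The screenshots of the kubeadm output are not shown in this copy; for kubeadm v1.20 the printed instructions are the standard ones:

```shell
# As a regular user: copy the admin kubeconfig into your home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# As root, you can instead point kubectl directly at the admin config
export KUBECONFIG=/etc/kubernetes/admin.conf
```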

2. A network add-on still needs to be deployed.

3. Any other node that runs the printed control-plane join command joins the cluster as a master node.

4. Any other node that runs the printed worker join command joins the cluster as a worker node.
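The join commands themselves come from your own `kubeadm init` output; the token and hash below are placeholders, not real values:

```shell
# Worker join (replace <token> and <hash> with the values kubeadm printed)
kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# The control-plane join command additionally passes --control-plane

# Tokens expire after 24 hours; print a fresh worker join command on the master:
kubeadm token create --print-join-command
```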

Deploying the network add-on

We use Calico as the network add-on. Run the following on the master:

  

curl https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O

kubectl apply -f calico.yaml

 


Because the file is hosted abroad, downloading calico.yaml may fail; in that case, create calico.yaml yourself and paste the manifest contents into it.

  

(The v3.10.4 calico.yaml was pasted here in full, but the inline copy lost its YAML indentation, its --- document separators, and several JSON braces, so it cannot be applied as-is. Use the file fetched by the curl command above, or copy it from any machine that can reach docs.projectcalico.org. For reference, the manifest contains: the calico-config ConfigMap, the Calico CustomResourceDefinitions, the ClusterRoles and ClusterRoleBindings for calico-kube-controllers and calico-node, the calico-node DaemonSet, the calico-kube-controllers Deployment, and their ServiceAccounts, using the calico/cni, calico/node, calico/pod2daemon-flexvol and calico/kube-controllers images at v3.10.4. One field to check before applying: CALICO_IPV4POOL_CIDR defaults to 192.168.0.0/16 and must match the --pod-network-cidr passed to kubeadm init.)

 


Let's look at the cluster state.

For now there is only the master node. Next, check the pods.

They are all running and healthy.

Joining worker nodes to the cluster

Run the join command on k8snode1 and k8snode2.

On the master, check the node status with: kubectl get nodes

Then check the pods again with: kubectl get pods -A
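A couple of useful variants while waiting for the new nodes to settle:

```shell
# Stream node status updates until Ctrl-C; new nodes move from NotReady to Ready
kubectl get nodes -w

# Watch all pods across namespaces; a calico-node pod appears for each joined node
kubectl get pods -A -w
```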

1. The Docker version and the k8s version should match up, otherwise problems are likely.

2. Kubernetes networking is fairly involved; it is worth digging into if you are interested.

Further reading:

云原生实战 (Cloud Native in Action)

云原生Java架构师的第一课 K8s+Docker+KubeSphere+DevOps (The Cloud-Native Java Architect's First Course)

