Ubuntu 24.04: Deploying K8s on Docker (using private repositories)
Published: 2025-12-14 18:12:57  Author: 熊猫主机教程网
I have recently started learning and using K8s and have tested several installation methods. This walkthrough installs from offline images and a local package mirror, so the apt repositories and the image registry need to be set up in advance. In the commands below, 私服域名 is a placeholder for the private registry's domain.
1.1 Server initialization (required on all servers)
1.1.1 Initialize the apt sources
root@k8smaster232:~# cat /etc/apt/sources.list.d/ubuntu.sources
Types: deb
URIs: http://192.168.1.12:8081/repository/Ubuntu-Proxy/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://192.168.1.12:8081/repository/Ubuntu-Proxy/
Suites: noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
1.1.2 Initialize the Docker apt source
The proxied repository mirrors Aliyun, so add Alibaba Cloud's repository key:
root@k8smaster232:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Repository definition:
root@k8smaster232:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] http://192.168.1.12:8081/repository/Ubuntu-Docker noble stable
1.1.3 Initialize the Kubernetes apt source
The proxied repository mirrors Aliyun, so add Alibaba Cloud's repository key:
root@k8smaster232:~# curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Repository definition:
root@master233:~# cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb/ /
Repository definition (no GPG key; usable when nginx serves as the apt repository):
root@master233:~# cat /etc/apt/sources.list.d/kubernetes.list
deb [trusted=yes] http://192.168.1.12:8081/repository/Ubuntu-K8s noble main
1.1.4 Confirm the sources are ready
root@k8smaster232:~# apt update
Hit:1 http://192.168.1.12:8081/repository/Ubuntu-Docker noble InRelease
Get:2 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  InRelease [1,186 B]
Hit:3 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble InRelease
Hit:4 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-updates InRelease
Hit:5 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-backports InRelease
Hit:6 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-security InRelease
Get:7 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages [4,405 B]
1.1.5 Install Docker and adjust its configuration
root@k8smaster232:~# apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin ca-certificates curl gnupg -y
Edit the configuration file:
root@k8smaster232:~# vi /etc/docker/daemon.json
{
  "insecure-registries": ["https://私服域名"]
}
root@k8smaster232:~# systemctl restart docker
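Since the kubelet will later (1.1.15) be configured to use the systemd cgroup driver, it can help to point Docker at the same driver here. The snippet below is only a sketch of a fuller daemon.json, not part of the steps above; 私服域名 is still the private-registry placeholder:
root@k8smaster232:~# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["https://私服域名"]
}
root@k8smaster232:~# systemctl restart docker
# Confirm which cgroup driver Docker reports
root@k8smaster232:~# docker info | grep -i "cgroup driver"
1.1.6 Add hosts entries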
root@k8smaster230:~# cat >> /etc/hosts << EOF
192.168.1.230 k8smaster230
192.168.1.231 k8sslave231
192.168.1.232 k8sslave232
EOF
1.1.7 Time synchronization
root@k8smaster230:~# timedatectl set-timezone Asia/Shanghai
root@k8smaster230:~# apt install ntpdate -y
root@k8smaster230:~# ntpdate time1.aliyun.com
root@k8smaster230:~# crontab -e
0 0 * * * ntpdate time1.aliyun.com
1.1.8 Disable swap
root@k8smaster230:~# swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
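To confirm swap is really off and that the fstab entry was commented out, a quick check (swapon printing nothing means no active swap):
root@k8smaster230:~# swapon --show
root@k8smaster230:~# free -h | grep -i swap
root@k8smaster230:~# grep swap /etc/fstab
1.1.9 System parameter tuning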
root@k8smaster230:~# vi /etc/security/limits.conf
# Maximum number of open file handles
* soft nofile 65535
* hard nofile 65535
# Maximum number of processes per user
* soft nproc 8192
* hard nproc 8192
# Maximum locked-in-memory address space; unlimited = no limit
* soft memlock unlimited
* hard memlock unlimited

root@k8smaster230:~# vim /etc/sysctl.conf
vm.max_map_count = 262144
root@k8smaster230:~# sysctl -p
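The limits.conf values only apply to new login sessions, while the sysctl value can be verified immediately; a short check, assuming a fresh shell:
root@k8smaster230:~# ulimit -n
65535
root@k8smaster230:~# sysctl vm.max_map_count
vm.max_map_count = 262144
1.1.10 Adjust kernel parameters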
root@k8smaster230:~# cat << EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay
br_netfilter
Load the modules:
root@k8smaster230:~# modprobe overlay
root@k8smaster230:~# modprobe br_netfilter
Check that they loaded:
root@k8smaster230:~# lsmod | egrep "overlay"
overlay               212992  0
root@k8smaster230:~# lsmod | egrep "br_netfilter"
br_netfilter           32768  0
bridge                421888  1 br_netfilter
1.1.11 Enable kernel forwarding
root@k8smaster230:~# cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Apply the settings:
root@k8smaster230:~# sysctl -p /etc/sysctl.d/k8s.conf
root@k8smaster230:~# sysctl --system
Check that they took effect:
root@k8smaster230:~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
1.1.12 Install ipset and ipvsadm
root@k8smaster230:~# apt install ipset ipvsadm -y
Add the modules that need to be loaded:
root@k8smaster230:~# cat << EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
Create a script that loads the modules:
root@k8smaster230:~# cat << EOF | tee ipvs.sh
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Run the script:
root@k8smaster230:~# bash ipvs.sh
Check that the modules are loaded:
root@k8smaster230:~# lsmod | grep ip_vs
ip_vs_sh               12288  0
ip_vs_wrr              12288  0
ip_vs_rr               12288  0
ip_vs                 221184  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          196608  4 xt_conntrack,nf_nat,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              12288  6 nf_conntrack,nf_nat,btrfs,nf_tables,raid456,ip_vs
1.1.13 Install cri-dockerd (binary release; does not currently work on Ubuntu 24.04, use the method in 1.1.14 instead)
Download the binary:
root@k8smaster230:~# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.21/cri-dockerd-0.3.21.amd64.tgz
Extract it:
root@k8smaster230:~# tar zxf cri-dockerd-0.3.21.amd64.tgz
Install cri-dockerd:
root@k8smaster230:~# install -o root -g root -m 0755 ./cri-dockerd/cri-dockerd /usr/local/bin/cri-dockerd
Hand cri-dockerd over to systemd. Note the extra option --pod-infra-container-image=registry.k8s.io/pause:3.9, which can be changed to the private registry address; the binary path must also match where it was actually installed:
root@k8smaster230:~# vi /etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=私服域名/google_containers/pause:3.10.1 --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
The socket is also managed by systemd:
root@k8smaster230:~# vi /etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
Reload systemd and start the socket:
root@k8smaster230:~# systemctl daemon-reload
root@k8smaster230:~# systemctl enable --now cri-docker.socket
Check that it is running:
root@k8smaster230:~# ll /var/run/*.sock
srw-rw---- 1 root docker 0 Nov 12 11:39 /var/run/cri-dockerd.sock=
srw-rw---- 1 root docker 0 Nov 11 16:57 /var/run/docker.sock=
1.1.14 Install cri-dockerd (build from source; works correctly)
Download and configure the Go environment:
root@k8smaster230:~# wget https://go.dev/dl/go1.25.4.linux-amd64.tar.gz
root@k8smaster230:~# tar -C /usr/local -xzf go1.25.4.linux-amd64.tar.gz
root@k8smaster230:~# vim /etc/profile
export PATH=$PATH:/usr/local/go/bin
root@k8smaster230:~# source /etc/profile
root@k8smaster230:~# go version
go version go1.23.3 linux/amd64
Build the project:
root@k8smaster230:~# git clone https://github.com/Mirantis/cri-dockerd.git
root@k8smaster230:~# cd cri-dockerd
root@k8smaster230:~# apt install make
root@k8smaster230:~# make cri-dockerd
root@k8smaster230:~# install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
root@k8smaster230:~# install packaging/systemd/* /etc/systemd/system
root@k8smaster230:~# sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
Change the image source in cri-docker.service:
root@k8smaster232:~/cri-dockerd# vi /etc/systemd/system/cri-docker.service
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=私服域名/google_containers/pause:3.10.1 --container-runtime-endpoint fd://
root@k8smaster230:~# chmod -x /etc/systemd/system/cri-docker.service
root@k8smaster230:~# chmod -x /etc/systemd/system/cri-docker.socket
root@k8smaster230:~# systemctl daemon-reload
root@k8smaster230:~# systemctl enable --now cri-docker
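Before moving on, it is worth confirming that cri-dockerd built correctly and that the socket is listening; a quick check, assuming the unit and socket names installed above:
root@k8smaster230:~# systemctl status cri-docker --no-pager
root@k8smaster230:~# cri-dockerd --version
root@k8smaster230:~# ls -l /var/run/cri-dockerd.sock
1.1.15 Install kubectl, kubeadm and kubelet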
Check the available packages:
root@k8smaster230:~# apt-cache policy kubeadm kubectl kubelet
kubeadm:
  Installed: 1.34.1-1.1
  Candidate: 1.34.1-1.1
  Version table:
 *** 1.34.1-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
        100 /var/lib/dpkg/status
     1.34.0-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
kubectl:
  Installed: 1.34.1-1.1
  Candidate: 1.34.1-1.1
  Version table:
 *** 1.34.1-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
        100 /var/lib/dpkg/status
     1.34.0-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
kubelet:
  Installed: 1.34.1-1.1
  Candidate: 1.34.1-1.1
  Version table:
 *** 1.34.1-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
        100 /var/lib/dpkg/status
     1.34.0-1.1 500
        500 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages
root@k8smaster230:~# apt-get install -y kubelet=1.34.1-1.1 kubeadm=1.34.1-1.1 kubectl=1.34.1-1.1
Pin the versions:
root@k8smaster230:~# apt-mark hold kubelet kubeadm kubectl
Unpin them (when an upgrade is needed):
root@k8smaster230:~# apt-mark unhold kubelet kubeadm kubectl
Configure the kubelet to use the systemd cgroup driver:
root@k8smaster230:~# vi /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock"
Enable kubelet at boot:
root@k8smaster230:~# systemctl enable kubelet
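As a sanity check that the versions are pinned and the tools are on PATH (the expected hold list follows from the install above):
root@k8smaster230:~# apt-mark showhold
kubeadm
kubectl
kubelet
root@k8smaster230:~# kubeadm version
root@k8smaster230:~# kubectl version --client
1.2 Deploy K8s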
1.2.1 Generate and edit the kubeadm config file (Master only)
root@k8smaster230:~# kubeadm config print init-defaults > kubeadm-config.yaml
root@k8smaster230:~# vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change to the master's IP
  advertiseAddress: 192.168.1.230
  bindPort: 6443
nodeRegistration:
  # Change the container runtime to docker (cri-dockerd)
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  # Master node hostname
  name: k8smaster230
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
# Image repository address
imageRepository: 私服域名/google_containers
kind: ClusterConfiguration
# Set the K8s version
kubernetesVersion: 1.34.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # Add the pod network CIDR
  podSubnet: 10.244.0.0/16
proxy: {}
scheduler: {}
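Because ipset and ipvsadm were installed in 1.1.12, kube-proxy can optionally be switched to IPVS mode by appending one more document to the same file; the snippet below is only a sketch of that optional change, and the check commands are a way to catch typos before the real initialization (kubeadm config validate is available in recent kubeadm releases):
# Optional: append to kubeadm-config.yaml to run kube-proxy in IPVS mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

# Check the config without touching the node
root@k8smaster230:~# kubeadm config validate --config kubeadm-config.yaml
root@k8smaster230:~# kubeadm init --config kubeadm-config.yaml --dry-run
1.2.2 Check and pull the images (Master only)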
Check that the images exist in the private registry:
root@k8smaster230:~# kubeadm config images list --kubernetes-version=v1.34.1 --image-repository 私服域名/google_containers
私服域名/google_containers/kube-apiserver:v1.34.1
私服域名/google_containers/kube-controller-manager:v1.34.1
私服域名/google_containers/kube-scheduler:v1.34.1
私服域名/google_containers/kube-proxy:v1.34.1
私服域名/google_containers/coredns:v1.12.1
私服域名/google_containers/pause:3.10.1
私服域名/google_containers/etcd:3.6.4-0
Pull the images:
root@k8smaster230:~# kubeadm config images pull --kubernetes-version=v1.34.1 --image-repository harbor.muscledog.top/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
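If the pull went through cri-dockerd, the images should now be visible in the local Docker image store; a quick look, assuming crictl was installed alongside kubeadm (it is pulled in via the cri-tools package):
root@k8smaster230:~# docker images | grep google_containers
root@k8smaster230:~# crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock images
1.2.3 Initialize K8s (Master only)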
Initialize the cluster:
root@k8smaster230:~# kubeadm init --config kubeadm-config.yaml --upload-certs --v=9
...
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.230:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6938788da15217dffd8aa82c1f058da6681a323e26ccb3f7180679b56ad919ae
...
Run the kubeconfig commands from the output:
root@k8smaster232:~# mkdir -p $HOME/.kube
root@k8smaster232:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8smaster232:~# chown $(id -u):$(id -g) $HOME/.kube/config
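At this point the control plane is up, but the node will stay NotReady until the Calico network plugin from section 1.3 is installed; a quick look at the current state:
root@k8smaster230:~# kubectl get nodes
root@k8smaster230:~# kubectl get pods -n kube-system
1.2.4 Join the worker nodes to the master (Slave only)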
Join the first worker node:
root@k8sslave231:~# kubeadm join 192.168.1.230:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6938788da15217dffd8aa82c1f058da6681a323e26ccb3f7180679b56ad919ae --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Join the second worker node:
root@k8sslave232:~# kubeadm join 192.168.1.230:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6938788da15217dffd8aa82c1f058da6681a323e26ccb3f7180679b56ad919ae --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
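If the bootstrap token has expired by the time a worker joins (it is only valid for 24h per the config above), a fresh join command can be generated on the master:
root@k8smaster230:~# kubeadm token create --print-join-command
# Append --cri-socket unix:///var/run/cri-dockerd.sock to the printed command before running it on the worker
1.3 Deploy the Calico network plugin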
1.3.1 Download the manifests
root@k8smaster232:~# wget https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/tigera-operator.yaml
root@k8smaster232:~# wget https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/custom-resources.yaml
1.3.2 Edit tigera-operator.yaml
Point the operator image at the private registry:
root@k8smaster232:~# vi tigera-operator.yaml
···
      imagePullSecrets: []
···
          image: 私服域名/tigera/operator:v1.40.0
          imagePullPolicy: IfNotPresent
···
1.3.3 Edit custom-resources.yaml
root@k8smaster232:~# vi custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      # Note: change this to the pod CIDR you used when deploying K8s
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
  # Use images from the private registry
  registry: 私服域名
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
---
# Configures the Calico Goldmane flow aggregator.
apiVersion: operator.tigera.io/v1
kind: Goldmane
metadata:
  name: default
---
# Configures the Calico Whisker observability UI.
apiVersion: operator.tigera.io/v1
kind: Whisker
metadata:
  name: default
1.3.4 Deploy Calico
root@k8smaster232:~# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created
deployment.apps/tigera-operator created

root@k8smaster232:~# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
goldmane.operator.tigera.io/default created
whisker.operator.tigera.io/default created

root@k8smaster232:~# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver-64c9c9bfcb-5s5h2         1/1     Running   0          7m8s
calico-apiserver-64c9c9bfcb-x99pp         1/1     Running   0          7m8s
calico-kube-controllers-f976c8f55-l6nc7   1/1     Running   0          7m8s
calico-node-kkkwc                         1/1     Running   0          7m8s
calico-node-rjcxg                         1/1     Running   0          7m8s
calico-node-wf85m                         1/1     Running   0          7m8s
calico-typha-7966b97589-lncnh             1/1     Running   0          7m2s
calico-typha-7966b97589-nnc5z             1/1     Running   0          7m8s
csi-node-driver-6h9hx                     2/2     Running   0          7m8s
csi-node-driver-hndsc                     2/2     Running   0          7m8s
csi-node-driver-trgtj                     2/2     Running   0          7m8s
goldmane-84d5b8fbb5-wwnbl                 1/1     Running   0          7m8s
whisker-547ff8b85b-hvqhk                  2/2     Running   0          88s
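With the calico-system pods Running, all three nodes should flip to Ready; a final check (the exact ages and pod hashes will differ per cluster):
root@k8smaster232:~# kubectl get nodes -o wide
root@k8smaster232:~# kubectl get pods -A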

