[toc]

Kubernetes operation notes (1)


Kubernetes can be deployed in two ways. One is to run each Kubernetes component as a system-level service (a regular system process); this is tedious and complex, though it can be automated with an existing Ansible playbook. The other is to use kubeadm, which deploys each Kubernetes component as a Pod.

Deploy a Kubernetes cluster with kubeadm

  1. Node network: 10.1.87.0/24
  2. Pod network: 10.244.0.0/16
  3. Service network: 10.96.0.0/12

Preparation

kubeadm requires kubelet and docker to be installed on every node; one node is then initialized as the master.
The Kubernetes components themselves all run as Pods, specifically as static Pods.
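As a minimal sketch of what a static Pod manifest looks like (a hypothetical example, not one kubeadm generates), the kubelet runs any manifest dropped into its manifest directory, /etc/kubernetes/manifests by default:

```yaml
# /etc/kubernetes/manifests/static-web.yaml  (hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: static-demo
spec:
  containers:
  - name: web
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
```

The kubelet watches this directory and manages such Pods itself, without the scheduler; that is how kubeadm runs the control-plane components.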

kubeadm
1. master, nodes: install kubelet, kubeadm, docker
2. master: kubeadm init
3. nodes: kubeadm join
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

There are four node servers; /etc/hosts is configured as follows:

10.1.87.80  master
10.1.87.81 node01
10.1.87.82 node02
10.1.87.83 node03

Note: the clocks on all four nodes must be kept in sync.

Configure the Yum repositories for Kubernetes and Docker

On the master:
# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
gpgcheck=1
enabled=1
# cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum clean all && yum makecache
Sync the Yum configuration to the other nodes:
# for i in {1..3} ;do scp docker-ce.repo kubernetes.repo node0$i:/etc/yum.repos.d/ ; done

Install the following packages on all four servers

# yum -y install docker-ce kubelet kubeadm kubectl
If a GPG key error appears, resolve it as follows:
# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# rpm --import rpm-package-key.gpg
On the master:
# vim /usr/lib/systemd/system/docker.service
Add the following under the [Service] section:
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8, 10.1.87.0/24"
# systemctl daemon-reload
# systemctl restart docker
# docker info
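If a proxy like the one above is unavailable, an alternative I would suggest (not part of the original steps; the mirror URL is only an example) is to configure a registry mirror in /etc/docker/daemon.json:

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```

Then run systemctl daemon-reload && systemctl restart docker. This only helps for Docker Hub images; the k8s.gcr.io images still need the re-tagging script shown below.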

Make sure both of the following kernel parameters are enabled (set to 1):

# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
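If either value is 0, a common way to enable them persistently (my addition, not in the original notes) is a sysctl drop-in; the br_netfilter module must be loaded first:

```ini
# /etc/sysctl.d/k8s.conf -- apply with: modprobe br_netfilter && sysctl --system
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
```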

Inspect the files installed by the kubelet package

# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
# systemctl enable kubelet
# systemctl enable docker

Alternatively, pull the required images from a mirror registry with a script and re-tag them:

# vim docker_install_kubelet_image.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.0 k8s.gcr.io/kube-apiserver:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

View the help for initializing a cluster with kubeadm init

# kubeadm init --help

Initialize Kubernetes

# vim  /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=swap
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 10.1.87.80:6443 --token 7pr4nt.q2vfoir7qia0vrcd \
--discovery-token-ca-cert-hash sha256:7e38f83642e4633a48efa1bd2bdc3cd2523e83736091b38ead58c88530758bdc
# kubectl get cs (componentstatus)
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 23m v1.14.1

Deploy flannel

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now the Kubernetes cluster is up and running:

# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 27m v1.14.1

List the Kubernetes namespaces

# kubectl get ns
NAME STATUS AGE
default Active 28m
kube-node-lease Active 28m
kube-public Active 28m
kube-system Active 28m

Join the remaining nodes node{1..3} to the Kubernetes cluster

Run the following commands on each of node{1..3}:

# systemctl start docker  && systemctl enable docker && systemctl enable kubelet
# kubeadm join 10.1.87.80:6443 --token 7pr4nt.q2vfoir7qia0vrcd --discovery-token-ca-cert-hash sha256:7e38f83642e4633a48efa1bd2bdc3cd2523e83736091b38ead58c88530758bdc --ignore-preflight-errors=swap

Note: the required images must be pulled manually here:

# vim  docker_install_join_kublet_image.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0

Copy this script to all the other worker nodes and run it:

# scp docker_install_join_kublet_image.sh  node02:/root/
# scp docker_install_join_kublet_image.sh node03:/root/
# run on each of the remaining nodes
# sh docker_install_join_kublet_image.sh
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready <none> 74m v1.14.1
node02 Ready <none> 74m v1.14.1
node03 Ready <none> 71m v1.14.1
master Ready master 16h v1.14.1

At this point the Kubernetes cluster initialization is complete.

Quick start with Kubernetes applications

Describe a node

# kubectl describe node node01

View Kubernetes cluster information

# kubectl version
# kubectl cluster-info

Create an nginx Pod

# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
# curl 10.244.3.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If the Pod is deleted manually, it gets recreated, because replicas=1 was specified when it was first created:

# kubectl delete pod nginx-deploy-55d8d67cf-qwlc2
# kubectl get pods  -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy-55d8d67cf-qj9z8 1/1 Running 0 5m44s 10.244.1.2 node01 <none> <none>

Expose the service port

# kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d16h
nginx ClusterIP 10.96.181.70 <none> 80/TCP 52s

Access 10.96.181.70 from inside the cluster

# curl 10.96.181.70
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check the CLUSTER-IP of kube-dns in the kube-system namespace

# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d16h

Create a client Pod

# kubectl run client --image=busybox --replicas=1 -it --restart=Never
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
# dig -t A nginx.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -t A nginx.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19937
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN A

;; ANSWER SECTION:
nginx.default.svc.cluster.local. 5 IN A 10.96.181.70

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Apr 28 10:41:56 CST 2019
;; MSG SIZE rcvd: 107

/ # wget -O - -q nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
# kubectl get deployment -w
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 1/2 2 1 70s
nginx-deploy 1/1 1 1 60m
# kubectl expose deployment myapp --name=myapp --port=80

Requests are scheduled randomly between the two Pods

/ # wget -O - -q myapp
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-xnslk
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-xnslk
/ # wget -O - -q myapp/hostname.html
myapp-5bc569c47d-fhf7w

Scale the number of Pods

# kubectl scale --replicas=5 deployment myapp
# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 35m
myapp-5bc569c47d-fhf7w 1/1 Running 0 22m
myapp-5bc569c47d-mqzdc 1/1 Running 0 112s
myapp-5bc569c47d-w55fb 1/1 Running 0 13m
myapp-5bc569c47d-xkk2x 1/1 Running 0 112s
myapp-5bc569c47d-xnslk 1/1 Running 0 22m
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 32m
# while sleep 1 ; do wget -O - -q myapp/hostname.html ;done
myapp-5bc569c47d-mqzdc
myapp-5bc569c47d-mqzdc
myapp-5bc569c47d-xkk2x
myapp-5bc569c47d-xnslk
myapp-5bc569c47d-w55fb
myapp-5bc569c47d-mqzdc
myapp-5bc569c47d-w55fb
myapp-5bc569c47d-fhf7w
myapp-5bc569c47d-xnslk
myapp-5bc569c47d-xkk2x
myapp-5bc569c47d-xnslk

Rolling update

# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
deployment.extensions/myapp image updated

Monitor the rolling update in real time

# kubectl rollout status deployment myapp
# while sleep 1 ; do wget -O - -q myapp ;done
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Rollback

# kubectl rollout undo deployment myapp
deployment.extensions/myapp rolled back
/ # while sleep 1 ; do wget -O - -q myapp ;done
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Using myapp as an example, expose it for access from outside the cluster

# kubectl edit svc myapp
Change type: ClusterIP to the following:
type: NodePort
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d17h
myapp NodePort 10.100.210.54 <none> 80:32672/TCP 46m
nginx ClusterIP 10.96.181.70 <none> 80/TCP 77m

Access from an external client

~ ➤ while sleep 1 ; do curl http://10.1.87.80:32672/hostname.html ; done
myapp-86984b4c7c-2vpjq
myapp-86984b4c7c-2vpjq
myapp-86984b4c7c-vw5md
myapp-86984b4c7c-vj7qm
myapp-86984b4c7c-vj7qm
myapp-86984b4c7c-2vpjq

From here, keepalived + nginx (or haproxy) can be put in front of the NodePorts to implement load balancing.
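A minimal nginx sketch of that idea (the addresses and NodePort 32672 are taken from above; everything else is illustrative):

```nginx
# upstream pool of the three workers' NodePort endpoints
upstream myapp_nodeport {
    server 10.1.87.81:32672;   # node01
    server 10.1.87.82:32672;   # node02
    server 10.1.87.83:32672;   # node03
}
server {
    listen 80;
    location / {
        proxy_pass http://myapp_nodeport;
    }
}
```

keepalived would then float a VIP across two such nginx hosts for high availability.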

Getting started with resource definition manifests

Define a simple resource manifest

# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5000"
# kubectl create -f pod-demo.yaml
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 1/2 Running 5 4m19s 10.244.3.124 node03 <none> <none>

View information about the Pod

# kubectl describe pod pod-demo

Pod controllers: advanced usage

The -L option shows, for all resources of the given type, the value of the specified label(s) as extra columns

# kubectl  get pods -L app
NAME READY STATUS RESTARTS AGE APP
client 1/1 Running 0 8d
myapp-86984b4c7c-rf4lz 1/1 Running 0 27h
myapp-86984b4c7c-wss2h 1/1 Running 0 28h
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 8d
pod-demo 2/2 Running 121 7d2h myapp

-l selects by label (label filtering)

# kubectl  get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 121 7d2h app=myapp,tier=frontend

Show the values of multiple labels

# kubectl  get pods -L app,run
NAME READY STATUS RESTARTS AGE APP RUN
client 1/1 Running 0 8d client
myapp-86984b4c7c-rf4lz 1/1 Running 0 27h myapp
myapp-86984b4c7c-wss2h 1/1 Running 0 28h myapp
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 8d nginx-deploy
pod-demo 2/2 Running 121 7d2h myapp

Add another label to pod-demo

# kubectl label pods pod-demo release=canary
pod/pod-demo labeled
# kubectl get pods -l app --show-labels;
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 121 7d2h app=myapp,release=canary,tier=frontend

Forcibly re-labeling an existing label fails with an error

# kubectl label pods pod-demo release=stable
error: 'release' already has a value (canary), and --overwrite is false

In that case, add --overwrite

# kubectl label pods pod-demo release=stable --overwrite
pod/pod-demo labeled

List Pods that have both the release label and the app label

# kubectl get pods -l release,app
NAME READY STATUS RESTARTS AGE
pod-demo 2/2 Running 121 7d2h

Label the nginx-deploy Pod with release=canary

# kubectl label pods nginx-deploy-55d8d67cf-v85hb release=canary
pod/nginx-deploy-55d8d67cf-v85hb labeled

List Pods with the release label, then those with release=canary

# kubectl  get pods -l release
NAME READY STATUS RESTARTS AGE
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 8d
pod-demo 2/2 Running 121 7d3h
# kubectl get pods -l release=canary
NAME READY STATUS RESTARTS AGE

Multi-condition label selectors

# kubectl get pods -l release=stable,app=myapp
NAME READY STATUS RESTARTS AGE
pod-demo 2/2 Running 121 7d3h
# kubectl get pods -l release!=stable
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 8d
myapp-86984b4c7c-rf4lz 1/1 Running 0 27h
myapp-86984b4c7c-wss2h 1/1 Running 0 28h
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 8d
# kubectl get pods -l "release in (canary,beta,alpha)"
NAME READY STATUS RESTARTS AGE
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 8d
# kubectl get pods -l "release notin (canary,beta,alpha)"
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 8d
myapp-86984b4c7c-rf4lz 1/1 Running 0 27h
myapp-86984b4c7c-wss2h 1/1 Running 0 28h
pod-demo 2/2 Running 121 7d3h
# kubectl  get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node01 Ready <none> 10d v1.14.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02 Ready <none> 10d v1.14.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
node03 Ready <none> 10d v1.14.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux
master Ready master 10d v1.14.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=

Give node01 an extra label indicating its disk type is SSD

# kubectl label nodes node01  disktype=ssd
# kubectl get nodes node01 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node01 Ready <none> 10d v1.14.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux

The benefit: once nodes carry labels, resources created later can express scheduling preferences toward particular nodes.

For example, specify a nodeSelector when creating the Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5000"
  nodeSelector:
    disktype: ssd

nodeSelector (disktype: ssd here) schedules the Pod onto nodes matching the given labels;
nodeName instead specifies directly which node the Pod runs on.

# kubectl describe pods pod-demo
The following event confirms that pod-demo was scheduled onto node01:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m46s default-scheduler Successfully assigned default/pod-demo to node01
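For comparison, a minimal sketch using nodeName (the Pod name is hypothetical), which bypasses the scheduler and pins the Pod to a node directly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node01-demo   # hypothetical
spec:
  nodeName: node01        # run directly on node01, no scheduling involved
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
```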

Adding annotations

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    ssjinyao.com/create-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5000"
  nodeSelector:
    disktype: ssd
# kubectl create -f pod-demo.yaml
# kubectl describe pods pod-demo
Annotations: ssjinyao.com/create-by: cluster admin

ExecAction: liveness probing with a custom command

# vim liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl create -f liveness-exec.yaml
# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 10d
liveness-exec-pod 1/1 Running 0 47s
myapp-86984b4c7c-rf4lz 1/1 Running 0 3d4h
myapp-86984b4c7c-wss2h 1/1 Running 0 3d5h
nginx-deploy-55d8d67cf-v85hb 1/1 Running 0 10d
pod-demo 2/2 Running 34 2d
liveness-exec-pod 1/1 Running 1 69s
liveness-exec-pod 1/1 Running 2 2m19s
liveness-exec-pod 1/1 Running 3 3m28s

liveness-exec-pod keeps restarting: the liveness probe fails once /tmp/healthy is removed.

HTTPGetAction-based liveness probing

# vim liveness-httpget.yaml
# cat liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl create -f liveness-httpget.yaml
# kubectl exec liveness-httpget-pod -it -- /bin/sh
# rm -f /usr/share/nginx/html/index.html
# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 10d
liveness-httpget-pod 1/1 Running 2 6m31s

Readiness probing: a Pod that is not ready receives no traffic

# vim readiness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl exec readiness-httpget-pod -it -- /bin/sh
/ # rm -f /usr/share/nginx/html/index.html
# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-httpget-pod 0/1 Running 0 2m52s

As shown, readiness-httpget-pod is not ready (0/1).

# kubectl exec readiness-httpget-pod -it -- /bin/sh
/ # echo "test html" > /usr/share/nginx/html/index.html
# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-httpget-pod 1/1 Running 0 4m30s

As soon as the page checked by the readiness HTTPGet probe is recreated, the Pod becomes ready again.

Pod lifecycle hooks: postStart (run right after container start) and preStop (run before termination), configured under lifecycle.

Note: the postStart hook is not guaranteed to run before the container's entrypoint (command), so command must not strongly depend on the result of postStart.

# vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","echo 'welcome www.ssjinyao.com' > /tmp/index.html"]
    command: ["/bin/sh"]
    args: ["-c","httpd -h /tmp && sleep 300000"]
