
k8s resource examples

1. LimitRange: limit the memory each Pod can use

Create the LimitRange and a Pod

First create a namespace, so that the resources created here are isolated from the rest of the cluster:

kubectl create namespace mem

Below is an example LimitRange manifest, memory-defaults.yaml:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 500Mi
      max:
        memory: 1Gi
      min:
        memory: 500Mi
      type: Container

Create the LimitRange in the mem namespace:

kubectl apply -f memory-defaults.yaml --namespace=mem

Now, if a Pod is created in the mem namespace without declaring its own memory request and limit, the control plane applies the default request of 500 MiB and the default limit of 512 MiB to it. A Pod whose memory goes above max or below min is not allowed to be created.
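To double-check what was configured, you can describe the LimitRange:

kubectl describe limitrange mem-limit-range --namespace=mem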

Now create a Pod, memory-defaults-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo
spec:
  containers:
    - name: default-mem-demo-ctr
      image: nginx
      resources:
        limits:
          memory: 512Mi
        requests:
          memory: 500Mi  # must be at least the LimitRange min of 500Mi, or the Pod is rejected

kubectl apply -f memory-defaults-pod.yaml --namespace=mem

Check:

kubectl get pod default-mem-demo --output=yaml --namespace=mem

The output shows a 500 MiB request and a 512 MiB limit.
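The relevant part of the output should look roughly like this (surrounding fields will differ):

resources:
  limits:
    memory: 512Mi
  requests:
    memory: 500Mi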


2. LimitRange: limit the CPU each Pod can use

Create a namespace:

kubectl create namespace cpu

Create the LimitRange, cpu-defaults.yaml:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - default:
        cpu: 1
      defaultRequest:
        cpu: 0.5
      max:
        cpu: 1
      min:
        cpu: "200m"
      type: Container
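Apply it to the cpu namespace, mirroring the memory example above:

kubectl apply -f cpu-defaults.yaml --namespace=cpu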

Create a Pod that requests CPU:

cpu-defaults-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo
spec:
  containers:
    - name: default-cpu-demo-ctr
      image: nginx
      resources:
        limits:
          cpu: "1"
        requests:
          cpu: 500m

Here 1 = 1000m (1 CPU = 1000 millicpu).
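Apply the Pod and inspect it the same way as in the memory example:

kubectl apply -f cpu-defaults-pod.yaml --namespace=cpu
kubectl get pod default-cpu-demo --output=yaml --namespace=cpu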


3. ResourceQuota: limit total CPU + memory in a namespace

kubectl create namespace quota-mem-cpu

Create a ResourceQuota:

quota-mem-cpu.yaml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

kubectl apply -f quota-mem-cpu.yaml -n=quota-mem-cpu

Create a Pod, quota-mem-cpu-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
    - name: quota-mem-cpu-demo-ctr
      image: nginx
      resources:
        limits:
          memory: "800Mi"
          cpu: "800m"
        requests:
          memory: "600Mi"
          cpu: "400m"

kubectl apply -f quota-mem-cpu-pod.yaml -n=quota-mem-cpu

Check:

kubectl get resourcequota -n quota-mem-cpu mem-cpu-demo -o yaml

The output shows the quota along with how much of it has been used. You can see that the Pod's memory and CPU requests and limits do not exceed the quota.

status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: 800m
    limits.memory: 800Mi
    requests.cpu: 400m
    requests.memory: 600Mi

Now, try to create a second Pod:

apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo-2
spec:
  containers:
    - name: quota-mem-cpu-demo-2-ctr
      image: redis
      resources:
        limits:
          memory: "1Gi"
          cpu: "800m"
        requests:
          memory: "700Mi"
          cpu: "400m"

In this manifest the Pod's memory request is 700 MiB. Note that the new memory request plus the memory request already in use exceeds the request quota: 600 MiB + 700 MiB > 1 GiB, so creation fails.
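Assuming the second manifest is saved as quota-mem-cpu-pod-2.yaml (a file name chosen here for illustration), applying it should be rejected by the quota admission check with a Forbidden error:

kubectl apply -f quota-mem-cpu-pod-2.yaml -n=quota-mem-cpu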

4. Limit the number of Pods in a namespace

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"

This namespace can then have at most two Pods.
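A minimal sketch of how to use it, assuming the manifest is saved as pod-quota.yaml and applied to a namespace named quota-pod (both names are just for illustration):

kubectl create namespace quota-pod
kubectl apply -f pod-quota.yaml --namespace=quota-pod
kubectl get resourcequota pod-demo --namespace=quota-pod -o yaml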


5. Advertise a custom (extended) resource on a node

Start a proxy so you can send requests with curl:

kubectl proxy

curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
http://localhost:8001/api/v1/nodes/<your-node-name>/status

Note: in the preceding request, ~1 is how the "/" character is encoded in the patch path.

Check the node's resource settings (a total of 4 dongles are advertised here):

kubectl describe node <your-node-name>
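In the describe output, the node's Capacity (and Allocatable) section should now include the new resource, roughly like:

Capacity:
  ...
  example.com/dongle: 4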

Next, verify that the resource takes effect with extend.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
    - name: extended-resource-demo-ctr
      image: nginx
      resources:
        requests:
          example.com/dongle: 3
        limits:
          example.com/dongle: 3

The container requests 3 dongles.
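Create the Pod and confirm it is scheduled:

kubectl apply -f extend.yaml
kubectl get pod extended-resource-demo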

Try to create a second Pod, extend2.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo-2
spec:
  containers:
    - name: extended-resource-demo-2-ctr
      image: nginx
      resources:
        requests:
          example.com/dongle: 2
        limits:
          example.com/dongle: 2

This would exceed the total of 4 dongles (3 + 2 > 4), so the Pod cannot be scheduled and stays Pending.
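Create it and inspect why it is not scheduled; describing the Pod should show a scheduling failure for example.com/dongle (exact wording may vary):

kubectl apply -f extend2.yaml
kubectl describe pod extended-resource-demo-2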


6. Enable DNS autoscaling (once this Deployment is installed, CoreDNS is scaled up and down automatically)

Check the name of the DNS Deployment:

kubectl get deployment -l k8s-app=kube-dns --namespace=kube-system

The output looks something like this:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
...
coredns   2/2     2            2           ...
...

So in the manifest below, set --target= to Deployment/coredns.

dns-autoscaler.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["replicationcontrollers/scale"]
    verbs: ["get", "update"]
  - apiGroups: ["apps"]
    resources: ["deployments/scale", "replicasets/scale"]
    verbs: ["get", "update"]
  # Remove the configmaps rule once the following issue is fixed:
  # kubernetes-incubator/cluster-proportional-autoscaler#16
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
subjects:
  - kind: ServiceAccount
    name: kube-dns-autoscaler
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-dns-autoscaler
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: kube-dns-autoscaler
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns-autoscaler
  template:
    metadata:
      labels:
        k8s-app: kube-dns-autoscaler
    spec:
      priorityClassName: system-cluster-critical
      securityContext:
        seccompProfile:
          type: RuntimeDefault
        supplementalGroups: [ 65534 ]
        fsGroup: 65534
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: autoscaler
          image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4
          resources:
            requests:
              cpu: "20m"
              memory: "10Mi"
          command:
            - /cluster-proportional-autoscaler
            - --namespace=kube-system
            - --configmap=kube-dns-autoscaler
            # Find the target with `kubectl get deployment -n kube-system`; here it is coredns, so use Deployment/coredns
            - --target=<SCALE_TARGET>
            # When the cluster uses large nodes (more cores), coresPerReplica should dominate.
            # With small nodes, nodesPerReplica should dominate.
            - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true,"includeUnschedulableNodes":true}}
            - --logtostderr=true
            - --v=2
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      serviceAccountName: kube-dns-autoscaler
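Apply the manifest (the file name dns-autoscaler.yaml comes from above) and check that the autoscaler Deployment is up:

kubectl apply -f dns-autoscaler.yaml
kubectl get deployment kube-dns-autoscaler --namespace=kube-system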