Kubernetes: Authentication, Authorization, and Admission Control

What is access control?

The api-server is the cluster's gateway: the single entry point through which the cluster components (scheduler, controller-manager, kubelet, kube-proxy), human user accounts (user), and pod process accounts (serviceaccount) all communicate with the cluster, with etcd behind it as the state store.

Access control in k8s is performed by the api-server in 3 phases, each implemented by a chain of plugins:

  1. Authentication: verify that the requester may connect to the api-server; the request passes as soon as any one plugin accepts it
  2. Authorization: determine which resources this user may access and which operations it may perform; the request passes as soon as any one plugin allows it
  3. Admission control: fine-grained supplementary checks on the operation, such as filling fields the user left undefined with default values; all enabled plugins must pass before the request is let through

Flow diagram: (figure omitted)

User accounts

There are generally 3 ways to operate a k8s cluster through the api-server; the initiating subject is either a human or a pod object, corresponding to the 2 account types user and serviceaccount:

  • the kubectl command-line client, or a graphical client
  • client API libraries
  • REST HTTP requests against the API

Accounts fall into 2 classes:

  • for humans: useraccount, a cluster-level concept
  • for pods: serviceaccount, a namespace-level resource

As the cluster's sole gateway, the api-server receives all API requests: from users, from other components, and from pods running on the cluster; each request passes through its authentication, authorization, and admission-control checks. When authenticating the identity behind a request, identities fall into 2 classes: human users (user) and pod users (serviceaccount).

An API request carries identity information for the api-server to verify, e.g. a particular user or serviceaccount identity.

Both user and sa (serviceaccount) accounts can belong to a group; permissions granted to the group are automatically inherited by its members, which makes it convenient to grant or revoke permissions for a whole class of users at once.

User groups

There are 4 built-in account groups:

  • system:authenticated, all accounts (user or sa) that passed api-server authentication are automatically placed in this group
  • system:unauthenticated, accounts (user or sa) that did not pass api-server authentication
  • system:serviceaccounts, the group holding all sa accounts in the cluster
  • system:serviceaccounts:<namespace>, the group holding all sa accounts in the given namespace

Whoever initiates it, every API request carries either a human user identity or a pod serviceaccount identity, and it must pass authentication; otherwise it is classified as an anonymous request under the anonymous user identity.

Authentication, authorization, admission control

Authentication

During authentication, the api-server extracts the username, uid, groups, and extra information from the credentials the client presents, and uses them to identify the user in the later phases.

The api-server supports multiple client authentication methods, each implemented by an authentication plugin. A cluster should enable at least the plugins authenticating user and sa accounts, since these 2 client types are the most common. Common methods:

  • x509 client certificates
  • static token file
  • bootstrap tokens, used to authenticate new nodes when they join the cluster
  • static password file
  • serviceaccount tokens
  • OpenID Connect tokens
  • webhook tokens
  • authenticating proxy
  • external Keystone server authentication
  • anonymous requests
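
As an illustration of one of these, a minimal sketch of the static token file format, enabled with the api-server flag --token-auth-file (the token, user, and group values are hypothetical):

# /etc/kubernetes/tokens.csv, one record per line: token,username,uid,"group1,group2"
0123456789abcdef,demo-user,1001,"dev-team"

# api-server started with:
#   kube-apiserver --token-auth-file=/etc/kubernetes/tokens.csv ...
# a client then authenticates by sending the header:
#   Authorization: Bearer 0123456789abcdef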

Authorization

After identity is authenticated, an authorization plugin determines which resources the user may access and which operations it may perform. Common authorization plugins:

  • Node, access control for requests made by kubelets
  • ABAC, attribute-based access control
  • RBAC, role-based access control
  • Webhook, HTTP-callback based, delegating the permission check to an external REST service
  • AlwaysDeny
  • AlwaysAllow, 2 special-purpose plugins

Admission control

After authentication and authorization, write requests are additionally intercepted by admission-control plugins, which perform finer-grained checks and mutations on the operation before it is persisted to etcd. Common plugins:

  • AlwaysAllow
  • AlwaysDeny
  • AlwaysPullImages
  • NamespaceLifecycle
  • LimitRanger, resource-limit enforcement
  • ServiceAccount, automatically associates an sa account with pod objects
  • ResourceQuota
  • ...

Note: read-only API requests do not pass through the admission-control plugins.

Service accounts (serviceaccount)

At runtime, a pod may need to issue API requests through the api-server to reach other services in the cluster. Those services require the caller to authenticate, so the pod must present its own identity, which is provided by the serviceaccount the pod carries, typically a username plus an associated secret object.

For example, a monitoring pod that asks each node's kubelet for metrics must present an sa account for the kubelet to authenticate, and that account must also be authorized to read node metrics.

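A minimal sketch of how a process inside a pod uses its mounted serviceaccount credentials to call the api-server (the mount paths are the standard ones shown below in this section; the /api path queried is only an example):

# run from inside a pod; the ca.crt and token files are mounted automatically
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api
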
serviceaccount automation

Any pod that does not explicitly define an sa account is assigned a default one.

1. Inspect a running pod

[root@client ~]# kubectl get pods ngx1 -o yaml
apiVersion: v1
kind: Pod
metadata:
...
  serviceAccount: default
  serviceAccountName: default
...
volumeMounts:
    - mountPath: /usr/share/nginx/html/
      name: nginx
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-q6vpk
      readOnly: true


...
  volumes:
  - name: default-token-q6vpk
    secret:
      defaultMode: 420
      secretName: default-token-q6vpk
1. A secret-type volume is defined, referencing the secret object named default-token-q6vpk
2. The container mounts it read-only at /var/run/secrets/kubernetes.io/serviceaccount
3. The secret object stores the information needed for authentication
4. The default sa account, default in the pod's namespace, was filled in by the admission controller

2. Inspect the corresponding secret object

[root@client ~]# kubectl get secret default-token-q6vpk
NAME                  TYPE                                  DATA   AGE
default-token-q6vpk   kubernetes.io/service-account-token   3      13d

# it provides three fields of data: ca.crt, namespace, token
[root@client ~]# kubectl get secret default-token-q6vpk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1C...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2...
kind: Secret

3. Inspect the files mounted into the container

The 3 fields defined in the secret appear as 3 files inside the container:

[root@client ~]# kubectl exec  ngx1 -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
[root@client ~]# kubectl exec  ngx1 -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIwMTExMDA2NTAwM1oXDTMwMTEwODA2NTAwM1owFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALue
E6S/sjrJd0CKIQjTK2wLdOSo+Q9HMiJQLvwq6A3daLLqaIGo0FRcvVIWtwwW78oO
SXKJWqImUQsthj+Fhuy8QFaaGIJgoQnSL5VYDzDRLaRN6lg0fjOYIFB055QVQ54R
apbpgW/N9BwrTiOQmGSBVWJt7SCb9Mz1ngw4FWSErLUpatFQ9id9AGa+5+H1XxO1
eiq6MPyejZ4Cfy+w3LCeDLwOizFPdfCP9t0HXYwgQkTOS8WfuyBTa4YCvQX2cM47
eGxJVB+R0aKOOW9pdtqiIanfYwwIf+oxE+CBYO+KIzWgNnD2wTbjsmebk8KoOk4o
vgbNPIjeQboEIDKHyskCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAI0HpRWd4ZsEHJe6Ef4cWrTX/gSv
Sbss22pqoZF6EURSpxBMlYa5DFAHBpRqBcWVig4KViCSl+hhSLRvOF8He6nlkxvc
bsKJT9c4eyFI+dOfEaAZUuhmUddSg72KcJpA+3dCnnYVvGI3gFfydaM8nkUr3Pm1
88Ixm66VCB/+IFzW88mHY9t3s9/v1aVizlAq3kknTZt7pwvNTirCD39I5PpBfPXL
/XuzUMBJ9jO4Goi+fwue+yjpOanDQcGCOX2LO7MoafKgMb1Dm3+MsfdNRtjS9FNS
aFzXCk/+V73U2BHb0qvFMaS4aO5RGJvUzGuzEpoAxDKUWA+Oudo4bI6XWBI=
-----END CERTIFICATE-----
[root@client ~]# kubectl exec  ngx1 -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
default[root@client ~]#

4. Inspect the default sa account generated in the namespace

It references the secret object that holds the authentication information.

In every namespace (ns), a default sa account and a default-token-xxx secret object are generated and associated with each other.

[root@client ~]# kubectl get sa -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: 2020-11-10T06:50:41Z
    name: default
    namespace: default
    resourceVersion: "333"
    selfLink: /api/v1/namespaces/default/serviceaccounts/default
    uid: 0caad4cf-2321-11eb-8d73-000c292d5d7c
  secrets:
  - name: default-token-q6vpk
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[root@client ~]# kubectl get sa -o wide
NAME      SECRETS   AGE
default   1         13d
[root@client ~]# kubectl get secret -o wide
NAME                  TYPE                                  DATA   AGE
default-token-q6vpk   kubernetes.io/service-account-token   3      13d

5. How sa automation is implemented (by 3 cooperating components)

  • the serviceaccount controller
  • the token controller
  • the serviceaccount admission controller
  1. The serviceaccount controller creates a default sa account in every namespace
  2. The token controller watches for sa account creation and attaches a secret object holding the authentication information
  3. The serviceaccount admission controller:
    1. when a pod-creation request arrives without an sa account, fills in the default sa of the pod's namespace, named default
    2. when the pod specifies an sa account, checks that the referenced sa exists and rejects the request if it does not

6. The key pair used to sign sa tokens

  • controller-manager is started with --service-account-private-key-file, the private key that signs sa tokens
  • api-server is started with --service-account-key-file, the public key used to verify sa tokens' signatures and thus their validity
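
For example, a sketch with typical kubeadm file paths (they may differ in other deployments):

# controller-manager signs serviceaccount tokens with the private key:
kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key ...

# api-server verifies them with the matching public key:
kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub ...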

Creating a serviceaccount

When creating a pod, the spec.serviceAccountName field (shown alongside its legacy alias spec.serviceAccount in pod output) selects the sa; if unset, the namespace's default sa account, named default, is used.

An sa account can be created from the command line or from a yaml manifest. Once created, the token controller, a sub-controller inside controller-manager, notices it, automatically generates a token, stores it in a secret object, and associates that secret with the new sa.

1. List the default sa accounts; every namespace has one named default:

[root@client ~]# kubectl get sa   --all-namespaces |grep default
NAMESPACE       NAME                                 SECRETS   AGE
default         default                              1         13d
ingress-nginx   default                              1         5d4h
kube-public     default                              1         13d
kube-system     default                              1         13d
test            default                              1         5d3

2. Create an sa account from a yaml file

[root@client sa]# kubectl apply -f sa-demo1.yaml 
serviceaccount/da-demo1 created
[root@client sa]# kubectl get sa
NAME       SECRETS   AGE
da-demo1   1         3s
default    1         13d
[root@client sa]# cat sa-demo1.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
 name: da-demo1

Only the sa's name (and namespace) need to be defined; the secret is created automatically by the token controller, named in the format SA_NAME-token-xxx.
A secret object can of course also be specified manually,
as can imagePullSecrets objects, the credentials for a private registry.
[root@client sa]# kubectl get secrets
NAME                   TYPE                                  DATA   AGE
da-demo1-token-b6tln   kubernetes.io/service-account-token   3      50s


Referencing imagePullSecrets from an sa

Through the sa's imagePullSecrets field, a docker-registry-type secret can be attached; pods that reference this sa account can then pull images from the corresponding private registry without a prior docker login on each node.

The docker-registry-type secret must be created in advance; it encodes the private registry's address, username, password, and user email.

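A hedged sketch of creating such a secret (the registry address and credentials are placeholders; the secret name matches the manifest below):

kubectl create secret docker-registry some-regstiry-secrets \
  --docker-server=registry.example.com \
  --docker-username=demo \
  --docker-password='S3cret!' \
  --docker-email=demo@example.com
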
[root@client ~]# kubectl explain sa.imagePullSecrets
KIND:     ServiceAccount
VERSION:  v1
...

---
apiVersion: v1
kind: ServiceAccount
metadata:
 name: da-demo1
imagePullSecrets:
- name: some-regstiry-secrets

x509 certificate authentication

One-way TLS/SSL authentication: only the client verifies the server's certificate, establishing trust in the server. (figure omitted)

Two-way (mutual) TLS/SSL authentication: client and server verify each other's certificates, so both ends are trusted. (figure omitted)

TLS/SSL authentication in k8s

In k8s, traffic among components, between clients and the api-server, and between the api-server and the etcd cluster must all be secured, and all of it is mutual TLS: each side presents its own certificate to the peer and verifies the peer's. (A single private CA is generally enough to sign them all.)

Examples of secured communication in a cluster:

  • between the api-server and the other components
    • controller-manager
    • scheduler
  • between the api-server and its clients
    • kubectl or GUI clients
    • the nodes: kubelet and kube-proxy; when a new node joins the cluster, it can automatically generate a private key and a certificate signing request (csr), submit it to the api-server, and have the master sign the certificate; this process is called tls bootstrapping
  • among etcd cluster members, port 2380
  • between etcd and its client, the api-server, port 2379


Client configuration: kubeconfig

The kubectl config command

Command format, used to configure kubeconfig files:

[root@client sa]# kubectl config --help
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context" 

Available Commands:
  current-context Displays the current-context
  delete-cluster  Delete the specified cluster from the kubeconfig
  delete-context  Delete the specified context from the kubeconfig
  get-clusters    Display clusters defined in the kubeconfig
  get-contexts    Describe one or many contexts
  rename-context  Renames a context from the kubeconfig file.
  set             Sets an individual value in a kubeconfig file
  set-cluster     Sets a cluster entry in kubeconfig
  set-context     Sets a context entry in kubeconfig
  set-credentials Sets a user entry in kubeconfig
  unset           Unsets an individual value in a kubeconfig file
  use-context     Sets the current-context in a kubeconfig file
  view            Display merged kubeconfig settings or a specified kubeconfig file

kubeconfig file format

A kubeconfig file is the authentication configuration file an api-server client uses to connect. It has 4 parts; multiple clusters and users can be defined and combined into different contexts, and use-context switches quickly between clusters and credentials:

  • clusters: cluster information, the API URL and a name
  • users: user information, name and credentials
  • contexts: combinations of a cluster and a user
  • current-context: which user connects to which cluster by default

[root@client sa]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.80.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Creating a kubeconfig file

1. Create the user, the user's private key, and the user's certificate

Create an ordinary user:

[root@master kube_config]# useradd kube_user1
[root@master kube_config]# su - kube_user1
[kube_user1@master ~]$ ll
total 0

[kube_user1@master ~]$ (umask 077; openssl genrsa -out kube_user1.key 2048)
Generating RSA private key, 2048 bit long modulus
............................................+++
................................................+++
e is 65537 (0x10001)

Generate the key and csr file; the -subj value is the identity id presented at authentication time, the username (CN) and the user's group name (O):

[kube_user1@master ~]$ openssl req -new -key kube_user1.key -out kube_user1.csr -subj "/CN=kube_user1/O=k8s"
[kube_user1@master ~]$ ll
total 8
-rw-rw-r-- 1 kube_user1 kube_user1  911 Nov 23 19:24 kube_user1.csr
-rw------- 1 kube_user1 kube_user1 1679 Nov 23 19:24 kube_user1.key

Switch back to root and sign the certificate with the cluster CA generated when the cluster was deployed with kubeadm:

[root@master kube_config]# openssl x509 -req -in /home/kube_user1/kube_user1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out kube_user1.crt -days 3650
Signature ok
subject=/CN=kube_user1/O=k8s
Getting CA Private Key

The certificate was written into kube_user1's home directory; fix its ownership:

[root@master kube_config]# chown -R kube_user1.kube_user1 /home/kube_user1/kube_user1.crt 
[root@master kube_config]# ll /home/kube_user1/
total 12
-rw-r--r-- 1 kube_user1 kube_user1  997 Nov 23 19:26 kube_user1.crt
-rw-rw-r-- 1 kube_user1 kube_user1  911 Nov 23 19:24 kube_user1.csr
-rw------- 1 kube_user1 kube_user1 1679 Nov 23 19:24 kube_user1.key

2. Assemble the kubeconfig file

Set the cluster entry:

[kube_user1@master ~]$ kubectl config set-cluster kubernetes --embed-certs=true \
> --certificate-authority=/etc/kubernetes/pki/ca.crt --server="https://192.168.80.101:6443"
Cluster "kubernetes" set.

Set the user (credentials) entry:

[kube_user1@master ~]$ kubectl config set-credentials kube_user1 --embed-certs=true --client-certificate=/home/kube_user1/kube_user1.crt \
> --client-key=/home/kube_user1/kube_user1.key 
User "kube_user1" set.

Set the context entry:

[kube_user1@master ~]$ kubectl config set-context kube_user1@kubernetes --cluster=kubernetes --user=kube_user1
Context "kube_user1@kubernetes" created.

Set the current-context entry:

[kube_user1@master ~]$ kubectl config use-context kube_user1@kubernetes
Switched to context "kube_user1@kubernetes".

Inspect the result:

[kube_user1@master ~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.80.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube_user1
  name: kube_user1@kubernetes
current-context: kube_user1@kubernetes
kind: Config
preferences: {}
users:
- name: kube_user1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

3. Test access with the kubeconfig file from the previous step

Access is denied because no permissions have been granted yet, but it proves the kubeconfig file works:

[kube_user1@master ~]$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "kube_user1" cannot list resource "pods" in API group "" in the namespace "default"

By default, each user's kubeconfig is read from $HOME/.kube/config; a specific kubeconfig file can also be passed explicitly:

[root@master kube_config]# kubectl get pods --kubeconfig=/home/kube_user1/.kube/config 
Error from server (Forbidden): pods is forbidden: User "kube_user1" cannot list resource "pods" in API group "" in the namespace "default"
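
The file to use can also be selected with the KUBECONFIG environment variable, e.g.:

export KUBECONFIG=/home/kube_user1/.kube/config
kubectl get pods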

tls bootstrapping mechanism

As noted earlier, nearly all k8s cluster communication requires mutual tls authentication before traffic is encrypted, and new nodes joining the cluster are no exception.

There are 2 ways to provision tls for new nodes: 1. manually generate a private key and certificate for every node and distribute them; 2. let each node generate and sign its own key and certificate.

At scale, method 1 is laborious and method 2 is a security risk, so a middle path is taken: tls bootstrapping.

When a new node joins, its kubelet automatically generates a private key and a certificate signing request (csr) and submits it to the master for review; once an administrator approves it on the master, the certificate is signed. But approving csr files one by one is still tedious, which is why the token was introduced.

As long as the new node presents a token generated and recognized on the master, its csr is signed automatically after submission by a certificate-signing controller inside controller-manager, avoiding manual approval. The token has an expiry, and the user holding it is placed into the system:bootstrappers group, which carries the appropriate rbac permissions for certificate signing.

In a cluster deployed with kubeadm, the token parameter that kubeadm join requires before a node can join is exactly this token.

The api-server's --client-ca-file startup parameter must point at the same CA the certificate-signing controller uses; only then do the signed certificates pass authentication.

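For example, the usual kubeadm flow (the address, token, and hash below are placeholders):

# on the master: create a bootstrap token and print the full join command
kubeadm token create --print-join-command

# on the new node: join using that token
kubeadm join 192.168.80.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>
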
RBAC: role-based access control

rbac is a flexible permission-granting mechanism. Unlike the traditional approach of granting permissions directly to users, **it inserts a layer between permissions and users: the role.** A role is built from verbs (operations) and objects, i.e. which actions may be performed on which objects; groups of verb/object pairs make up a role; roles are then granted to users or user groups, and the relationship between users and roles is many-to-many.

Once roles are defined, a newly created user only needs to be associated with a role to immediately hold all of that role's permissions, instead of having per-object permissions configured one by one for each new user, which improves flexibility.

The RBAC authorization plugin

The rbac authorization plugin associates predefined roles, each a set of operation permissions over certain objects, with subjects:

  • subject: a useraccount or serviceaccount, or a group
  • verbs: create, delete, update, get, ...
  • objects: the various k8s resource objects
  • a role is defined from verbs and objects, then associated with a subject, i.e. a user or sa account

RBAC-related resource objects in k8s

  • clusterrole, a cluster-level resource, defines sets of operations on cluster-level resources such as nodes
  • role, a namespace-level resource, defines sets of operations on ns-level resources such as pods
  • clusterrolebinding references a cluster-level clusterrole
  • rolebinding may reference a role in its own ns, or a cluster-level clusterrole
    • Alternatively, a RoleBinding can reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding.
  • note: user is cluster-level, sa is ns-level


role and rolebinding

  • a role defines objects (e.g. pods, services) and the operations allowed on them (e.g. create, delete, update, get)
  • a rolebinding binds a defined role to any of the three subject kinds: user, group, sa (sa being ns-level)
  • role and rolebinding are ns-level resources
  • a rolebinding may only reference a role in its own ns, or a defined cluster-level clusterrole
  • both role and rolebinding can be created from a yaml manifest or the command line (see the sketch after this list)
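
A hedged command-line equivalent of the manifests in this section (the names match the yaml examples that follow; subresources such as pods/log can be passed to --resource):

kubectl create role role-get-pod --verb=get,list,watch --resource=pods,pods/log
kubectl create rolebinding rolebinding-get-pod --role=role-get-pod --user=kube_user1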

1. Define the role manifest

[root@client role]# cat role-get-pod.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
 name: role-get-pod
rules:
- apiGroups: [""]
  resources: ["pods","pods/log"]
  verbs: ["get","list","watch"]

# rules defines the 2 essential fields: resources, the list of target objects; verbs, the list of allowed actions
# apiGroups names the group the resources belong to, also a list; "" denotes the core API group


[root@client role]# kubectl apply -f role-get-pod.yaml 
role.rbac.authorization.k8s.io/role-get-pod created
[root@client role]# kubectl get role
NAME           AGE
role-get-pod   2s
[root@client role]# kubectl get role role-get-pod -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"role-get-pod","namespace":"default"},"rules":[{"apiGroups":[""],"resources":["pods","pods/log"],"verbs":["get","list","watch"]}]}
  creationTimestamp: 2020-11-24T03:20:38Z
  name: role-get-pod
  namespace: default
  resourceVersion: "560845"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/roles/role-get-pod
  uid: 05fd5f74-2e04-11eb-8b3f-000c292d5d7c
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch

2. Define the rolebinding manifest

[root@client role]# cat rolebinding-get-pod.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
 name: rolebinding-get-pod
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: Role
 name: role-get-pod
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube_user1

3. Bind it to the kube_user1 user created earlier

Before binding:

[root@master ~]# kubectl get pods --kubeconfig=/home/kube_user1/.kube/config 
Error from server (Forbidden): pods is forbidden: User "kube_user1" cannot list resource "pods" in API group "" in the namespace "default"

Apply the binding:

[root@client role]# kubectl apply -f rolebinding-get-pod.yaml 
rolebinding.rbac.authorization.k8s.io/rolebinding-get-pod created

[root@client role]# kubectl get rolebinding
NAME                  AGE
rolebinding-get-pod   4s

After binding: with the bound role's permissions, pods can now be listed

[kube_user1@master ~]$ kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
dep1-7b96746498-2qv6t   1/1     Running             6          5d16h
dep1-7b96746498-59x58   1/1     Running             6          5d16h
dep1-7b96746498-8lldj   1/1     Running             6          5d16h
nginx-with-ssl          1/1     Running             2          45h
ngx1                    1/1     Running             2          46h
pod-env                 0/1     ImagePullBackOff    2          4d18h
pod1                    0/1     ImagePullBackOff    2          4d20h
pod2                    0/1     ImagePullBackOff    2          4d20h
state-stateset-0        0/1     ContainerCreating   0          44h

[kube_user1@master ~]$ kubectl get service
Error from server (Forbidden): services is forbidden: User "kube_user1" cannot list resource "services" in API group "" in the namespace "default"

4. role field reference:

kubectl explain role.rules
   apiGroups	<[]string>
     APIGroups is the name of the APIGroup that contains the resources. If
     multiple API groups are specified, any action requested against one of the
     enumerated resources in any API group will be allowed.

   nonResourceURLs	<[]string>
   # for defining non-resource URL paths (not k8s objects), e.g. /healthz
     NonResourceURLs is a set of partial urls that a user should have access to.
     *s are allowed, but only as the full, final step in the path Since
     non-resource URLs are not namespaced, this field is only applicable for
     ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply
     to API resources (such as "pods" or "secrets") or non-resource URL paths
     (such as "/api"), but not both.

   resourceNames	<[]string>
     ResourceNames is an optional white list of names that the rule applies to.
     An empty set means that everything is allowed.

   resources	<[]string>
     Resources is a list of resources this rule applies to. ResourceAll
     represents all resources.

   verbs	<[]string> -required-
     Verbs is a list of Verbs that apply to ALL the ResourceKinds and
     AttributeRestrictions contained in this rule. VerbAll represents all kinds
     
     
A role has only 4 top-level fields: apiVersion, kind, metadata, rules. rules defines which actions may be performed on which objects.
The objects are usually k8s resource objects, such as pods or services,
subresources of certain objects, such as pods/log or nodes/status,
or non-resource URL paths, such as /healthz.
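
A role's rules can also be narrowed to individual named objects via the optional resourceNames field; a minimal sketch (the role name is hypothetical, the pod name reuses ngx1 from earlier):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
 name: role-get-one-pod
rules:
- apiGroups: [""]
  resources: ["pods"]
  resourceNames: ["ngx1"]   # the rule applies only to this named pod
  verbs: ["get"]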

5. rolebinding field reference:

[root@client ~]# kubectl explain rolebinding
KIND:     RoleBinding
VERSION:  rbac.authorization.k8s.io/v1

DESCRIPTION:
     RoleBinding references a role, but does not contain it. It can reference a
     Role in the same namespace or a ClusterRole in the global namespace. It
     adds who information via Subjects and namespace information by which
     namespace it exists in. RoleBindings in a given namespace only have effect
     in that namespace.
...

   roleRef	<Object> -required-
     RoleRef can reference a Role in the current namespace or a ClusterRole in
     the global namespace. If the RoleRef cannot be resolved, the Authorizer
     must return an error.

   subjects	<[]Object>
     Subjects holds references to the objects the role applies to.

# subjects, the most essential part, names the bound subjects; there are three kinds: user, group, serviceaccount
# roleRef names the role being bound; it can be of 2 kinds: role or clusterrole

clusterrole and clusterrolebinding

A clusterrole defines operation permissions on cluster-level resources such as node and ns; its format is similar to a role's.

A clusterrolebinding binds a clusterrole to users, giving them operation permissions on cluster-level resources.

The rolebinding object is ns-level; when it binds a clusterrole, **the permissions granted to the user only take effect inside the rolebinding's namespace.** E.g.: if a clusterrole grants access to all configmap resources in all namespaces, and that clusterrole is bound to user1 through a rolebinding in the test namespace, then user1 can only access the configmaps of the test namespace; had a clusterrolebinding been used instead, user1 could access the configmaps of every namespace (see the sketch below).

  • cluster-level resources such as node and pv cannot be bound through a rolebinding, only through a clusterrolebinding
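
A hedged sketch of the example above, a rolebinding in the test namespace referencing a clusterrole (both object names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
 name: cm-reader-binding
 namespace: test           # the granted permissions apply only inside this ns
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: ClusterRole         # referencing a ClusterRole, not a Role
 name: configmap-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1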

1. The system has built-in access rules for non-resource URL paths

[root@client role]# kubectl get clusterrole system:discovery -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2020-11-10T06:50:24Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
  resourceVersion: "55"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Adiscovery
  uid: 02162b27-2321-11eb-8d73-000c292d5d7c
rules:
- nonResourceURLs:
  - /api
  - /api/*
  - /apis
  - /apis/*
  - /healthz
  - /openapi
  - /openapi/*
  - /swagger-2.0.0.pb-v1
  - /swagger.json
  - /swaggerapi
  - /swaggerapi/*
  - /version
  - /version/
  verbs:
  - get
[root@client role]# kubectl get clusterrolebinding system:discovery -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2020-11-10T06:50:24Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
  resourceVersion: "111"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Adiscovery
  uid: 024781cd-2321-11eb-8d73-000c292d5d7c
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated

The clusterrole and the clusterrolebinding are both named system:discovery,
and the binding grants 2 groups, system:authenticated and system:unauthenticated,
i.e. all users; any user gets read access to these URL resources.

2. A custom clusterrole for URL-type resources

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
 name: healthz-admin
rules:
- nonResourceURLs:
  - /healthz
  verbs:
  - get
  - create

Aggregated clusterroles

An aggregated clusterrole flexibly combines multiple other clusterroles (implemented with label selectors).

Since 1.9, k8s supports aggregating other clusterroles via the ClusterRole.aggregationRule field: the aggregated role defines no permissions of its own, but inherits and merges the permissions of other clusterroles as its own, selected through the clusterRoleSelectors label selectors.

The built-in clusterroles admin and edit are aggregated roles: you can define your own clusterrole as needed, have it carry the label that admin's or edit's selector matches, and thereby conveniently merge a custom permission set into the default roles.

[root@client ~]# kubectl get clusterrole admin -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-admin: "true"

---
[root@client ~]# kubectl get clusterrole edit -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole

1. Define an aggregated clusterrole

[root@client role]# cat aggre-clusterrole.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
 name: aggret-clusterrole
aggregationRule:
 clusterRoleSelectors:
 - matchLabels:
    rbac.demo.com/aggregate: "true"
rules: []

2. Define a labeled clusterrole that can be aggregated

[root@client role]# cat demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
 name: demo
 labels: 
  rbac.demo.com/aggregate: "true"
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - watch

3. Verify that the permissions were aggregated

[root@client role]# kubectl apply -f demo.yaml 
clusterrole.rbac.authorization.k8s.io/demo created
[root@client role]# kubectl get clusterrole aggret-clusterrole -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.demo.com/aggregate: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
            {"aggregationRule":{"clusterRoleSelectors":[{"matchLabels":{"rbac.demo.com/aggregate":"true"}}]},"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"aggret-clusterrole"},"rules":[]}
  creationTimestamp: 2020-11-24T08:34:50Z
  name: aggret-clusterrole
  resourceVersion: "580449"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/aggret-clusterrole
  uid: eaf6b98b-2e2f-11eb-8b3f-000c292d5d7c
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch

Built-in clusterroles

k8s ships with a set of built-in clusterroles and clusterrolebindings for its various components, mostly prefixed with system:.

1. Example:

[root@client role]# kubectl get clusterrole
NAME                                                                   AGE
admin                                                                  14d
aggret-clusterrole                                                     11m
cluster-admin                                                          14d
demo                                                                   4m50s
edit                                                                   14d
flannel                                                                13d
ingress-nginx                                                          6d2h
ingress-nginx-admission                                                6d2h
system:aggregate-to-admin                                              14d
system:aggregate-to-edit                                               14d
system:aggregate-to-view                                               14d
system:auth-delegator                                                  14d
system:aws-cloud-provider                                              14d
system:basic-user 

2. Commonly used clusterroles

  • cluster-admin, the cluster superadministrator role
    • bound by the same-named clusterrolebinding to the system:masters group; every user in that group is a cluster superadministrator
    • in a kubeadm-deployed cluster, the administrator identity /O=system:masters/CN=kubernetes-admin belongs to that group
    • two ways to configure a superadministrator:
      • create a user and bind it to the cluster-admin cluster role
      • create a user that joins the system:masters group by setting O=system:masters in its certificate
  • system:kube-scheduler
  • system:kube-controller-manager
  • system:node
  • ...

k8s dashboard

Dashboard is an add-on component that offers a graphical way to inspect and manage a k8s cluster, but it is essentially a graphical proxy, a frontend to the api-server; the requests are still handled by the api-server.

Dashboard is itself an api-server client. Users authenticate to it in 2 ways, kubeconfig or token, and it also needs a certificate issued by the k8s CA.

Deploying dashboard over https

  1. Create dashboard's private key and certificate

    
    [root@client dashboard]# (umask 077; openssl genrsa -out dashboard.key 2048)
    Generating RSA private key, 2048 bit long modulus
    .......................+++
    ..............................................+++
    e is 65537 (0x10001)
    [root@client dashboard]# ll
    total 4
    -rw------- 1 root root 1675 Nov 24 17:56 dashboard.key
    # generate the key
       
    [root@client dashboard]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=system:masters/CN=dashboard"
    [root@client dashboard]# ll
    total 8
    -rw-r--r-- 1 root root  924 Nov 24 17:57 dashboard.csr
    -rw------- 1 root root 1675 Nov 24 17:56 dashboard.key
    # generate the csr
       
    [root@master ~]# openssl x509 -req -in dashboard.csr -CA /etc/kubernetes/pki/ca.crt  -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out dashboard.crt -days 365
    Signature ok
    subject=/O=system:masters/CN=dashboard
    Getting CA Private Key
    # sign it
    
  2. Package the private key and certificate from the previous step into a secret

    [root@master ~]# kubectl create ns !$
    kubectl create ns kubernetes-dashboard
    namespace/kubernetes-dashboard created
       
    [root@master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kubernetes-dashboard
    secret/kubernetes-dashboard-certs created
    # create the namespace, and in it a secret referencing the private key and certificate created above
    
  3. Deploy the pod-based dashboard from the online manifest

    https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
    # yaml manifest URL
    # edits before applying:
    # add container startup args pointing at the key and certificate just created
    # change the service type to NodePort
       
    [root@master ~]# kubectl apply -f recommended.yaml
    
  4. Find the NodePort address

    [root@master ~]# kubectl get svc -n kubernetes-dashboard
    NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    dashboard-metrics-scraper   ClusterIP   10.97.49.122   <none>        8000/TCP        9m57s
    kubernetes-dashboard        NodePort    10.105.77.10   <none>        443:30717/TCP   9m58s
    
  5. Access it from an external browser (screenshot omitted)

Note:

Dashboard runs as a pod, so the account it uses to connect to the api-server is a serviceaccount; logging in to the dashboard UI also uses that sa account, and how much you can see and do after login depends on that sa's permissions.

The sa can be bound to different roles for different permissions, e.g. the built-in cluster-admin for superadministrator rights.

Configuring token authentication

  1. Create the sa account dash-admin

    [root@master ~]# kubectl create sa dash-admin -n kube-system
    serviceaccount/dash-admin created
       
    
  2. Bind the cluster-admin role to the sa account dash-admin

    [root@master ~]# kubectl create clusterrolebinding dash-admin --clusterrole=cluster-admin \
    > --serviceaccount=kube-system:dash-admin
    clusterrolebinding.rbac.authorization.k8s.io/dash-admin created
    
  3. Get the secret corresponding to the sa account dash-admin

    root@master ~]# kubectl get secret -n kube-system |grep dash-admin
    dash-admin-token-wbdpl                           kubernetes.io/service-account-token   3      5m22s
    
  4. Read the token in the secret

    [root@master ~]# kubectl describe secrets dash-admin-token-wbdpl -n kube-system 
    Name:         dash-admin-token-wbdpl
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dash-admin
                  kubernetes.io/service-account.uid: aefbae96-2e41-11eb-8b3f-000c292d5d7c
       
    Type:  kubernetes.io/service-account-token
       
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoLWFkbWluLXRva2VuLXdiZHBsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNXJ2aWNlLWFjY291bnQubmFtZSI6ImRhc2gtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhZWZiYWU5Ni0yZTQxLTExZWItOGIzZi0wMDBjMjkyZDVkN2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGFzaC1hZG1pbiJ9.IzI5BD6304jg7Ssq-Fq1DjcS0OWuaqjmuLLgGfib-KhHcOAGrEN8-alAZC_maIEqjFHjlXNQfxZ4ihh09zsEv2CBL7J6heFYX_ZAHtNhdoxpsO_
    
  5. Enter the token on the login page (screenshot omitted)

Configuring kubeconfig authentication

Compared with a raw token, a kubeconfig file is easier to store and distribute:

  1. set-cluster writes the cluster information

  2. set-credentials writes the sa account's token information

    1. for a user account, set-credentials writes the user's certificate and key
    2. for an sa account, set-credentials writes the token from the secret associated with the sa, i.e. the token string inside the secret
  3. set-context sets the context

  4. use-context sets the default context

  5. Create a new sa account

    [root@master ~]# kubectl create sa default-admin -n default
    serviceaccount/default-admin created
    [root@master ~]# kubectl create clusterrolebinding default-admin --clusterrole=admin \
    > --serviceaccount=default:default-admin
    clusterrolebinding.rbac.authorization.k8s.io/default-admin created
    
  6. set-cluster: write the cluster information

    [root@master ~]# kubectl config set-cluster kubernetes --embed-certs=true --server="https://192.168.80.101:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=./defautl-admin.kubeconfig
    Cluster "kubernetes" set.
    
  7. set-credentials: write the sa account's token

    [root@master ~]# kubectl get secret |grep default-admin
    default-admin-token-g64d2   kubernetes.io/service-account-token   3      4m31s

    [root@master ~]# token=$(kubectl get secret default-admin-token-g64d2 -o jsonpath={.data.token} |base64 -d)

    [root@master ~]# kubectl config set-credentials default-admin --token=${token} --kubeconfig=./defautl-admin.kubeconfig
    User "default-admin" set.
       
    
  8. set-context: set the context

    [root@master ~]# kubectl config set-context default-admin --cluster=kubernetes --user=default-admin --kubeconfig=./defautl-admin.kubeconfig 
    Context "default-admin" created.
       
    
  9. use-context: set the default context

    [root@master ~]# kubectl config use-context default-admin --kubeconfig=./defautl-admin.kubeconfig 
    Switched to context "default-admin".
       
    
  10. Send the file to the client host; on the login page choose kubeconfig and select this file to log in (screenshot omitted).

Admission controllers

What admission controllers do:

After authentication and authorization, admission controllers intercept write operations and run checks on them; their uses include filling missing fields with default values, requiring container images to come from a certain registry, and checking whether a pod's resource requirements exceed the limits.

LimitRange admission control

When defining a pod's containers, the resource requests and limits fields bound the resources a container may use;

but a container whose definition omits (or forgets) resource limits could consume the node's resources without bound, affecting everything else on it.

Hence the limitrange object, a namespace-level object: one limitrange object per namespace can define, for every container running in that ns, the lower bound, upper bound, and default (when undefined) of the resources it may use.

limitrange can constrain cpu, memory, and storage; cpu and memory apply to pods and their containers, while storage mainly applies to pvc.

Once a limitrange is defined, api requests creating objects pass through the LimitRanger admission controller's check; if the resource requirements they define do not fit the ranges in the limitrange, the request is rejected and the submission fails.

1. Syntax:

[root@master ~]# kubectl explain limitrange
KIND:     LimitRange
VERSION:  v1

DESCRIPTION:
     LimitRange sets resource usage limits for each kind of resource in a
     Namespace.
...

2. Create a limitrange

It defines a container's maximum, minimum, and default cpu values:

[root@client limitrange]# vim limitrange1.yaml
[root@client limitrange]# kubectl apply -f limitrange1.yaml 
limitrange/cpu-limit created
[root@client limitrange]# kubectl get limitrange -o wide
NAME        CREATED AT
cpu-limit   2020-11-24T11:32:22Z
[root@client limitrange]# cat limitrange1.yaml 
apiVersion: v1
kind: LimitRange
metadata:
 name: cpu-limit
spec:
 limits:
 - default:
    cpu: 1000m
   defaultRequest:
    cpu: 1000m
   min:
    cpu: 500m
   max:
    cpu: 2000m
   maxLimitRequestRatio:
    cpu: 4
   type: Container

3. Create a container without resource requirements and check whether it gets the defaults defined in the limitrange

[root@client limitrange]# kubectl run limit-pod1 --image=ikubernetes/myapp:v1 \
> --restart=Never
pod/limit-pod1 created

# a container with no resource requirements automatically inherits the namespace's limitrange settings
[root@client limitrange]# kubectl get pods limit-pod1 -o yaml
...
  containers:
  - image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    name: limit-pod1
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "1"

4. Create a container whose resource request is below the minimum

[root@client limitrange]# kubectl run limit-pod2 --image=ikubernetes/myapp:v1 --restart=Never --requests='cpu=400m'
Error from server (Forbidden): pods "limit-pod2" is forbidden: minimum cpu usage per Container is 500m, but request is 400m.

5. Create a container whose resource request exceeds the maximum

[root@client limitrange]# kubectl run limit-pod3 --image=ikubernetes/myapp:v1 --restart=Never --requests='cpu=3000m'
The Pod "limit-pod3" is invalid: spec.containers[0].resources.requests: Invalid value: "3": must be less than or equal to cpu limit

As shown, whether the resource requirements given to a container fall below or above the ranges set in the limitrange, the LimitRanger admission check fails and pod creation is refused.

ResourceQuota admission control

limitrange only bounds what the containers of a single pod may use, so the users of a namespace could still raise the namespace's overall consumption simply by piling up more objects. When one ns corresponds to one tenant, there must be a mechanism to cap that tenant's resource usage; hence the resourceQuota object, which can cap a namespace's aggregate resource usage and the number of resource objects of each kind.

Effects:

  • resourceQuota is set at the namespace level
  • it limits how many objects of each kind may be created in the ns, e.g. total pods, total services
  • it limits the aggregate cpu and memory (requests and limits) of all pods in the ns, the number of pvc in use, the total pv storage, etc.
  • in a namespace with a resourceQuota object, submitted pods must define the resources field on their containers, or a limitrange must exist to supply the defaults

In a multi-tenant environment, one ns usually corresponds to one tenant or project, and the tenants must be isolated from each other along several dimensions within a single cluster:

  • namespaces isolate names: different ns can hold same-named resources of the same kind without interfering
  • cni and the network plugin isolate the network; inter-tenant network isolation is achieved through them
  • limitrange, set at the ns level, gives all pods in the namespace their default, lower, and upper resource bounds
  • resourceQuota, set at the ns level, caps the aggregate resources the namespace may consume and the number of each kind of resource object, e.g. at most 100 deployment controllers

1. Syntax:

[root@client ~]# kubectl explain resourceQuota
KIND:     ResourceQuota
VERSION:  v1

DESCRIPTION:
     ResourceQuota sets aggregate quota restrictions enforced per namespace

---
[root@client ~]# kubectl explain resourceQuota.spec
KIND:     ResourceQuota
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the desired quota.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

     ResourceQuotaSpec defines the desired hard limits to enforce for Quota.

2. Commonly used fields:

cpu | requests.cpu

memory | requests.memory

limits.cpu

limits.memory

requests.storage          # total storage requested by all pvc

persistentvolumeclaims    # total number of pvc that may be created

---
object-count limits:

count/deployments.apps
count/services

3. Create a resourceQuota object

yaml definition: it caps cpu and memory limits at 2 cores and 2Gi, and allows at most 1 deployment controller:

[root@client resourcesQuota]# cat demo-ns-resourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
 name: demo-quota
 namespace: demo
spec:
 hard:
  pods: 3
  requests.cpu: "1"
  requests.memory: 1Gi
  limits.cpu: "2"
  limits.memory: 2Gi
  count/deployments.apps: "1"
  count/deployments.extensions: "1"
  persistentvolumeclaims: "2"

Inspect:

[root@client resourcesQuota]# kubectl create ns demo
namespace/demo created


[root@client resourcesQuota]# kubectl get resourceQuota -n demo
NAME         CREATED AT
demo-quota   2020-11-25T07:56:17Z

Check the current resourceQuota status; the Used column is still all zero:

[root@client resourcesQuota]# kubectl describe resourceQuota -n demo 
Name:                         demo-quota
Namespace:                    demo
Resource                      Used  Hard
--------                      ----  ----
count/deployments.apps        0     1
count/deployments.extensions  0     1
limits.cpu                    0     2
limits.memory                 0     2Gi
persistentvolumeclaims        0     2
pods                          0     3
requests.cpu                  0     1
requests.memory               0     1Gi

Create a deployment controller: the Used column now has values, the sums over the 2 replica pods:

[root@client resourcesQuota]# kubectl run demo-dep1 --image=ikubernetes/myapp:v1 --replicas=2 --namespace=demo --requests='cpu=200m,memory=256Mi' --limits='cpu=500m,memory=528Mi'
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/demo-dep1 created

---
[root@client resourcesQuota]# kubectl describe resourceQuota -n demo 
Name:                         demo-quota
Namespace:                    demo
Resource                      Used    Hard
--------                      ----    ----
count/deployments.apps        1       1
count/deployments.extensions  0       1
limits.cpu                    1       2
limits.memory                 1056Mi  2Gi
persistentvolumeclaims        0       2
pods                          2       3
requests.cpu                  400m    1
requests.memory               512Mi   1Gi

Scale the pods up: it can grow to at most 3, the cap defined in the quota:

[root@client resourcesQuota]# kubectl scale deploy demo-dep1 -n demo --replicas=4
deployment.extensions/demo-dep1 scaled
[root@client resourcesQuota]# kubectl get pods -n demo
NAME                        READY   STATUS    RESTARTS   AGE
demo-dep1-894bc78f5-mvzgg   1/1     Running   0          3m17s
demo-dep1-894bc78f5-mxmp6   1/1     Running   0          11s
demo-dep1-894bc78f5-p9x8r   1/1     Running   0          3m17s

podSecurityPolicy

podSecurityPolicy, psp for short, is a cluster-level resource. Its admission plugin is disabled by default, because once it is enabled with no psp objects defined, the creation of any pod in the cluster is blocked.

Introduction: https://kubernetes.io/docs/concepts/policy/pod-security-policy/

Syntax:

[root@client resourcesQuota]# kubectl explain podSecurityPolicy.spec
KIND:     PodSecurityPolicy
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     spec defines the policy enforced.

     PodSecurityPolicySpec defines the policy enforced. Deprecated: use
     PodSecurityPolicySpec from policy API Group instead.

Purpose: checking whether a user is allowed to submit the creation of privileged pods.

How to use it:

  1. define psp objects as needed (a minimal sketch follows this list)
  2. enable the psp admission plugin in the api-server (it is disabled by default)
  3. define a role or clusterrole that references the required psp objects
  4. bind the role referencing the psp objects to user or sa accounts with a rolebinding or clusterrolebinding
  5. the user or sa then holds the permissions defined in the corresponding psp objects
  6. when the user or sa submits a pod creation, the psp controller checks it: if the requested pod is privileged (uses the host node's network namespace or ipc, particular volume types, particular port ranges, and so on), the psp objects it references must define the matching privileges; if they do, the request passes, otherwise it fails
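
A minimal, restrictive psp sketch, modeled on the example in the documentation linked above (field values are illustrative; on older clusters the apiVersion may show as extensions/v1beta1, as in the explain output):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
 name: restricted-demo
spec:
 privileged: false         # forbid privileged containers
 hostNetwork: false        # forbid using the node's network namespace
 hostIPC: false
 hostPID: false
 runAsUser:
  rule: RunAsAny
 seLinux:
  rule: RunAsAny
 supplementalGroups:
  rule: RunAsAny
 fsGroup:
  rule: RunAsAny
 volumes:
 - configMap
 - secret
 - emptyDir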

Summary:

  • on the api-server: the overall workflow and purpose of the three steps (user authentication, user authorization, admission-control checks on write requests)
  • the two account classes in k8s: human users user (group), and pod process users serviceaccount
    • authentication: configuring a user identity requires a private key and a CA-issued certificate, injected into the client kubeconfig configuration file
    • authentication: configuring an sa identity requires creating the sa account first; the system automatically creates a matching secret object for it, containing the token used for verification, which can also be injected into a client kubeconfig file
    • granting permissions: whether for user (group) or sa accounts, permissions are granted by binding the account to an existing role through a rolebinding or clusterrolebinding; a role is a set of verbs and target objects, and the account is the acting subject
  • secure communication among the cluster's components: mutual tls authentication between client and server, then encrypted traffic
  • the tls bootstrapping mechanism and flow
  • the RBAC authorization mechanism:
    • rbac is k8s's default authorization mechanism, implemented by the rbac plugin of step two, user authorization
    • role and clusterrole
    • rolebinding and clusterrolebinding
    • aggregated clusterroles, which define a clusterrole flexibly by combining multiple other clusterroles
    • the built-in clusterroles, used by the components or during tls bootstrapping
  • k8s dashboard
    • the manifest-based deployment of dashboard running as a pod
    • because dashboard is a pod, its identity is of the sa type; it is authenticated with the token in the secret object associated with the sa account, not with a user's private key and certificate
    • the 2 ways to log in to dashboard: a token, or a kubeconfig file with the token embedded (the same thing underneath)
  • admission controllers: the last step of the checking pipeline
    • they intercept and check only write requests; their uses include filling in default field values, checking a single pod's resource limits, checking the aggregate resource limits of the namespace the resource is created in, etc.
    • admission control is implemented by multiple plugins checked serially; unlike the first 2 steps, all of them must pass for the request to finally go through