K8s ingress: nginx ingress controller is not running

I have a Jenkins image and I created a Service of type NodePort for it. That works fine. Since I will be adding more services, I need an nginx ingress to route traffic to the different services.
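For context, the routing I have in mind looks roughly like the following sketch (the second service name is hypothetical, just to illustrate several services behind one ingress):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-service-ingress        # illustrative only
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /jenkins               # route /jenkins to the Jenkins service
        backend:
          serviceName: shinyinfo-jenkins-svc
          servicePort: 8080
      - path: /other                 # hypothetical future service
        backend:
          serviceName: some-other-svc
          servicePort: 80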

At the moment I am running two virtual machines (CentOS 7.5) on my Windows 10 host. One VM acts as master1 and has two internal IPv4 addresses (10.0.2.9 and 192.168.56.103); the other VM is the worker node4 (10.0.2.6 and 192.168.56.104).

All images are local; I loaded them into the local Docker image store on the nodes. The problem is that the nginx ingress controller does not start.
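For reference, this is roughly how I preloaded the images on each node, since imagePullPolicy is set to Never in the manifests below (the tar file name is arbitrary):

# on a machine that already has the image
docker save quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0 -o nginx-ingress-controller.tar

# copy the tar to the node, then load it into the node's local Docker image store
docker load -i nginx-ingress-controller.tar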

My configuration is as follows:

ingress-nginx-ctl.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-nginx
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
        name: ingress-nginx
        imagePullPolicy: Never
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend

ingress-nginx-res.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: shinyinfo-jenkins-svc
          servicePort: 8080

nginx-default-backend.yaml:

kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: chenliujin/defaultbackend
        imagePullPolicy: Never
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 10Mi
          requests:
            cpu: 10m
            memory: 10Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

shinyinfo-jenkins-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
 name: shinyinfo-jenkins
 labels:
   app: shinyinfo-jenkins
spec:
 containers:
   - name: shinyinfo-jenkins
     image: shinyinfo_jenkins
     imagePullPolicy: Never
     ports:
       - containerPort: 8080
       - containerPort: 50000
     volumeMounts:
     - mountPath: /devops/password
       name: jenkins-password
     - mountPath: /var/jenkins_home
       name: jenkins-home
 volumes:
   - name: jenkins-password
     hostPath:
       path: /jenkins/password
   - name: jenkins-home
     hostPath:
       path: /jenkins

shinyinfo-jenkins-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: shinyinfo-jenkins-svc
  labels:
    name: shinyinfo-jenkins-svc
spec:
  selector:
    app: shinyinfo-jenkins
  type: NodePort
  ports:
  - name: tcp
    port: 8080
    nodePort: 30003

Something is wrong with the nginx ingress; the console output looks like this:

[master@master1 config]$ sudo kubectl apply -f ingress-nginx-ctl.yaml
service/ingress-nginx created
deployment.extensions/ingress-nginx created

[master@master1 config]$ sudo kubectl apply -f ingress-nginx-res.yaml
ingress.extensions/my-ingress created

The ingress pod goes into CrashLoopBackOff. Why?

[master@master1 config]$ sudo kubectl get po
NAME                                     READY     STATUS             RESTARTS   AGE
ingress-nginx-66df6b6d9-mhmj9            0/1       CrashLoopBackOff   1          9s
nginx-default-backend-645546c46f-x7s84   1/1       Running            0          6m
shinyinfo-jenkins                        1/1       Running            0          20m

Describe the pod:

[master@master1 config]$ sudo kubectl describe po ingress-nginx-66df6b6d9-mhmj9
Name:               ingress-nginx-66df6b6d9-mhmj9
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node4/192.168.56.104
Start Time:         Thu, 08 Nov 2018 16:45:46 +0800
Labels:             app=ingress-nginx
                    pod-template-hash=228926285
Annotations:        <none>
Status:             Running
IP:                 100.127.10.211
Controlled By:      ReplicaSet/ingress-nginx-66df6b6d9
Containers:
  ingress-nginx:
    Container ID:  docker://2aba164d116758585abef9d893a5fa0f0c5e23c04a13466263ce357ebe10cb0a
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
    Image ID:      docker://sha256:a3f21ec4bd119e7e17c8c8b2bf8a3b9e42a8607455826cd1fa0b5461045d2fa9
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 08 Nov 2018 16:46:09 +0800
      Finished:     Thu, 08 Nov 2018 16:46:09 +0800
    Ready:          False
    Restart Count:  2
    Liveness:       http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-66df6b6d9-mhmj9 (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-24hnm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-24hnm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-24hnm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  40s                default-scheduler  Successfully assigned default/ingress-nginx-66df6b6d9-mhmj9 to node4
  Normal   Pulled     18s (x3 over 39s)  kubelet, node4     Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0" already present on machine
  Normal   Created    18s (x3 over 39s)  kubelet, node4     Created container
  Normal   Started    17s (x3 over 39s)  kubelet, node4     Started container
  Warning  BackOff    11s (x5 over 36s)  kubelet, node4     Back-off restarting failed container

Pod logs:

[master@master1 config]$ sudo kubectl logs ingress-nginx-66df6b6d9-mhmj9
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.20.0
  Build:      git-e8d8103
  Repository: https://github.com/kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------
nginx version: nginx/1.15.5
W1108 08:47:16.081042       6 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1108 08:47:16.081234       6 main.go:196] Creating API client for https://10.96.0.1:443
I1108 08:47:16.122315       6 main.go:240] Running in Kubernetes cluster version v1.11 (v1.11.3) - git (clean) commit a4529464e4629c21224b3d52edfe0ea91b072862 - platform linux/amd64
F1108 08:47:16.123661       6 main.go:97] ✖ The cluster seems to be running with a restrictive Authorization mode and the Ingress controller does not have the required permissions to operate normally.

Can the experts here give me a hint?


person user84592    schedule 08.11.2018


Answers (1)


You need to configure ingress-nginx to use a dedicated service account and grant that service account the required permissions.

Here is an example:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: lb
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-normal
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-minimal
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-dev"
      - "ingress-controller-leader-prod"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-minimal
subjects:
  - kind: ServiceAccount
    name: lb
    namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-normal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-normal
subjects:
  - kind: ServiceAccount
    name: lb
    namespace: kube-system
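
To make the controller pods actually use this account, reference it from the pod spec of the controller Deployment. A minimal sketch, assuming you keep everything in the default namespace (in that case the ServiceAccount and the namespaced Role/RoleBinding above need to be created in default instead of kube-system):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-nginx
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: lb      # must exist in the same namespace as this Deployment
      containers:
      - name: ingress-nginx
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
        imagePullPolicy: Never
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend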
person Kun Li    schedule 08.11.2018
comment
Yes, I changed the namespace to default and put serviceAccount: lb in the spec in ingress-nginx-ctl.yaml. Now at least the pod is in the Running state. - person user84592; 09.11.2018
comment
I also needed to add the annotation nginx.ingress.kubernetes.io/ssl-redirect: false to the ingress-nginx-res.yaml file. - person user84592; 09.11.2018
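
For reference, a sketch of where that annotation goes in ingress-nginx-res.yaml (the value is quoted because annotation values must be strings):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: shinyinfo-jenkins-svc
          servicePort: 8080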