
Kubernetes Job

The objects covered so far exist to keep a pod's application running continuously, and you specify their label selectors yourself.

A Job runs a program that finishes at some point.

apache -> keeps working continuously

a database backup -> is not continuous; it must end once the backup completes

ad-hoc and batch work; the Job resource belongs to the batch API group

a Job guarantees that its pod runs to completion

e.g., a broadcaster's encoding job that converts footage into digital files

For a Job, you do not specify the label selector yourself.

If labels were picked by hand and picked badly, pods managed by a ReplicaSet could be selected -> the Job guarantees termination -> it could end up terminating them.

Auto-generating the selector prevents such mislabeling.
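You can see the auto-generated selector on a created Job; a quick check (the exact label name, e.g. controller-uid, varies by Kubernetes version):

vagrant@k-control:~$ kubectl get job ldh-job -o jsonpath='{.spec.selector}'

The output is a matchLabels entry with a unique controller-uid label that the Job controller also stamps onto its pods, so the Job can never select pods it did not create.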

Restart policy:

the default restartPolicy is Always, but a Job's pod is supposed to terminate; restarting a finished pod would be an error, so Always is not allowed

a Job must use OnFailure or Never

-> on failure, the run is retried up to 6 times by default (backoffLimit: 6)
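If you do set restartPolicy: Always in a Job template, the API server rejects it with a validation error roughly like this (wording varies by version):

The Job "ldh-job" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"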

 

completions: 3 -> run the pod three times

one pod is created and finishes, then the next is created and finishes, then the next (one at a time)

parallelism: 3 -> the three pods run simultaneously

 

# Write the YAML file
vagrant@k-control:~$ cat ldh-job.yml 
apiVersion: batch/v1
kind: Job 
metadata:
  name: ldh-job
spec:
  template:
    metadata:
      labels:
        app: dh-ds
    spec:
      restartPolicy: OnFailure
      containers:
      - image: busybox
        name: ldh
        command: ["sleep", "20"]
        

# Create the Job object
vagrant@k-control:~$ kubectl create -f ldh-job.yml
job.batch/ldh-job created
vagrant@k-control:~$ kubectl get job.batch
NAME      COMPLETIONS   DURATION   AGE
ldh-job   1/1           28s        59s
vagrant@k-control:~$ kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
ldh-job-kwxj5      0/1     Completed   0          69s
ldh-pod            1/1     Running     2          41h
ldh-pod-label      1/1     Running     1          22h
ldh-rs-exp-b5bhs   1/1     Running     1          16h
vagrant@k-control:~$ kubectl get job.batch
NAME      COMPLETIONS   DURATION   AGE
ldh-job   1/1           28s        2m26s
vagrant@k-control:~$ kubectl delete -f ldh-job.yml
job.batch "ldh-job" deleted
vagrant@k-control:~$ kubectl create -f ldh-job.yml
job.batch/ldh-job created
vagrant@k-control:~$ kubectl get job.batch
NAME      COMPLETIONS   DURATION   AGE
ldh-job   0/1           4s         4s
vagrant@k-control:~$ kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
ldh-job-59dm2      1/1     Running   0          9s
ldh-pod            1/1     Running   2          41h
ldh-pod-label      1/1     Running   1          23h
ldh-rs-exp-b5bhs   1/1     Running   1          16h
vagrant@k-control:~$ kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
ldh-job-59dm2      0/1     Completed   0          25s
ldh-pod            1/1     Running     2          41h
ldh-pod-label      1/1     Running     1          23h
ldh-rs-exp-b5bhs   1/1     Running     1          16h
vagrant@k-control:~$
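The finished pod is kept in Completed state rather than deleted, so its logs stay retrievable; here sleep 20 writes nothing, so the log is simply empty, but the command works:

vagrant@k-control:~$ kubectl logs ldh-job-59dm2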
vagrant@k-control:~$ cat ldh-job-comp.yml
apiVersion: batch/v1
kind: Job 
metadata:
  name: ldh-job-comp
spec:
  completions: 3
  template:
    metadata:
      labels:
        app: ldh-job-comp
    spec:
      restartPolicy: OnFailure
      containers:
      - image: busybox
        name: ldh
        command: ["sleep", "20"]

vagrant@k-control:~$ kubectl create -f ldh-job-comp.yml
job.batch/ldh-job-comp created
vagrant@k-control:~$ kubectl get job.batch
NAME           COMPLETIONS   DURATION   AGE
ldh-job        1/1           24s        5m10s
ldh-job-comp   0/3           8s         8s
vagrant@k-control:~$ kubectl get pods --watch
NAME                 READY   STATUS              RESTARTS   AGE
ldh-job-59dm2        0/1     Completed           0          5m27s
ldh-job-comp-qct2n   0/1     Completed           0          25s
ldh-job-comp-z2xqc   0/1     ContainerCreating   0          0s
ldh-pod              1/1     Running             2          42h
ldh-pod-label        1/1     Running             1          23h
ldh-rs-exp-b5bhs     1/1     Running             1          16h
ldh-job-comp-qct2n   0/1     Completed           0          25s
ldh-job-comp-z2xqc   0/1     ContainerCreating   0          1s
ldh-job-comp-z2xqc   1/1     Running             0          5s
ldh-job-comp-z2xqc   0/1     Completed           0          24s
ldh-job-comp-tmv6v   0/1     Pending             0          0s
ldh-job-comp-tmv6v   0/1     Pending             0          0s
ldh-job-comp-tmv6v   0/1     ContainerCreating   0          0s
ldh-job-comp-z2xqc   0/1     Completed           0          24s
ldh-job-comp-tmv6v   0/1     ContainerCreating   0          1s
ldh-job-comp-tmv6v   1/1     Running             0          4s
ldh-job-comp-tmv6v   0/1     Completed           0          24s
ldh-job-comp-tmv6v   0/1     Completed           0          24s
^Cvagrant@k-control:~$ kubectl get job.batch
NAME           COMPLETIONS   DURATION   AGE
ldh-job        1/1           24s        7m12s
ldh-job-comp   3/3           73s        2m10s
vagrant@k-control:~$ kubectl delete -f ldh-job-comp.yml
job.batch "ldh-job-comp" deleted
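Note the watch above: qct2n, z2xqc, and tmv6v ran strictly one after another, and the Job's DURATION of 73s is roughly three back-to-back ~24s runs; completions alone executes the pods sequentially.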
vagrant@k-control:~$ vi ldh-job-para.yml
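The session does not display ldh-job-para.yml, but the watch below shows all three pods Running at once, so it presumably adds parallelism: 3 to the completions spec; a plausible reconstruction:

apiVersion: batch/v1
kind: Job
metadata:
  name: ldh-job-para
spec:
  completions: 3
  parallelism: 3
  template:
    metadata:
      labels:
        app: ldh-job-para
    spec:
      restartPolicy: OnFailure
      containers:
      - image: busybox
        name: ldh
        command: ["sleep", "20"]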
vagrant@k-control:~$ kubectl create -f ldh-job-para.yml
job.batch/ldh-job-para created
vagrant@k-control:~$ kubectl get job.batch
NAME           COMPLETIONS   DURATION   AGE
ldh-job        1/1           24s        15m
ldh-job-para   0/3           5s         5s
vagrant@k-control:~$ kubectl get pods --watch
NAME                 READY   STATUS      RESTARTS   AGE
ldh-job-59dm2        0/1     Completed   0          16m
ldh-job-para-k5kk6   1/1     Running     0          12s
ldh-job-para-rch66   1/1     Running     0          12s
ldh-job-para-xkqz2   1/1     Running     0          12s
ldh-pod              1/1     Running     2          42h
ldh-pod-label        1/1     Running     1          23h
ldh-rs-exp-b5bhs     1/1     Running     1          17h
ldh-job-para-rch66   0/1     Completed   0          25s
ldh-job-para-rch66   0/1     Completed   0          25s
ldh-job-para-xkqz2   0/1     Completed   0          27s
ldh-job-para-xkqz2   0/1     Completed   0          27s
ldh-job-para-k5kk6   0/1     Completed   0          27s
ldh-job-para-k5kk6   0/1     Completed   0          27s
^Cvagrant@k-control:~$ kubectl get job.batch
NAME           COMPLETIONS   DURATION   AGE
ldh-job        1/1           24s        17m
ldh-job-para   3/3           27s        82s
vagrant@k-control:~$ kubectl delete -f ldh-job-para.yml
job.batch "ldh-job-para" deleted
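Compare the two runs: sequential completions took 73s, while with parallelism: 3 all three pods finished in 27s, essentially the duration of a single run.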
vagrant@k-control:~$
vagrant@k-control:~$ cat ldh-job-atvdl.yml
apiVersion: batch/v1
kind: Job 
metadata:
  name: ldh-job-atvdl
spec:
  backoffLimit: 3
  activeDeadlineSeconds: 30
  template:
    metadata:
      labels:
        app: ldh-job-atvdl
    spec:
      restartPolicy: OnFailure
      containers:
      - image: busybox
        name: ldh
        command: ["sleep", "20"]

vagrant@k-control:~$ vi ldh-job-atvdl.yml
vagrant@k-control:~$ kubectl create -f ldh-job-atvdl.yml
Error from server (AlreadyExists): error when creating "ldh-job-atvdl.yml": jobs.batch "ldh-job-atvdl" already exists
vagrant@k-control:~$ kubectl delete -f ldh-job-atvdl.yml
job.batch "ldh-job-atvdl" deleted
vagrant@k-control:~$ kubectl create -f ldh-job-atvdl.yml
job.batch/ldh-job-atvdl created
vagrant@k-control:~$ kubectl get pods --watch
NAME                  READY   STATUS      RESTARTS   AGE
ldh-job-59dm2         0/1     Completed   0          33m
ldh-job-atvdl-8rz2b   1/1     Running     0          4s
ldh-pod               1/1     Running     2          42h
ldh-pod-label         1/1     Running     1          23h
ldh-rs-exp-b5bhs      1/1     Running     1          17h
ldh-job-atvdl-8rz2b   1/1     Terminating   0          30s
ldh-job-atvdl-8rz2b   1/1     Terminating   0          54s
ldh-job-atvdl-8rz2b   0/1     Terminating   0          54s
ldh-job-atvdl-8rz2b   0/1     Terminating   0          59s
ldh-job-atvdl-8rz2b   0/1     Terminating   0          59s
^Cvagrant@k-control:~$ kubectl get job.batch
NAME            COMPLETIONS   DURATION   AGE
ldh-job         1/1           24s        34m
ldh-job-atvdl   0/1           85s        86s
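activeDeadlineSeconds: 30 caps the Job's total runtime: once 30 seconds elapse, the running pod is terminated (visible in the watch above at AGE 30s) and the Job is marked failed with a DeadlineExceeded condition, so COMPLETIONS stays 0/1. The vi edit before this run is not shown; presumably it lengthened the sleep past the 30-second deadline, since the original sleep 20 would have finished in time. kubectl describe job ldh-job-atvdl would show the failure reason.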