192.168.0.0/16
The IP range Pods draw their addresses from when they are created.
A /24 network gives 254 host addresses;
a /16 network gives 256*256-2 = 65,534.
Consider the expected number of Pods when designing a Kubernetes cluster.
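A quick way to sanity-check that arithmetic in shell (2 to the power of the host bits, minus the network and broadcast addresses):

echo $(( 2 ** (32 - 24) - 2 ))   # /24 -> 254
echo $(( 2 ** (32 - 16) - 2 ))   # /16 -> 65534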
Each Pod is assigned an IP.
You can set one manually in the spec,
but it must not collide with an existing address.
Survey every Pod IP first? -> realistically impossible.
Automatic IP assignment is recommended,
which means Pod IPs cannot be predicted
and cannot be configured by prediction.
----> painful
In a dynamic environment there is no predicting them.
That is why the Service resource exists!
A Service has proxy functionality built in:
it relays traffic in the middle.
Reverse proxy:
a ------------->proxy (sits on the server side)------>b
the client talks only to the proxy ---> the proxy relays (connects) ---> the server
acts as a load balancer
Forward proxy:
the client and the proxy are on the same network
c---->proxy (sits on the client side)--> only allowed traffic --> forwarded on to the server
behaves like a piece of security equipment
The Service resource is a reverse proxy!!
It plays the role of proxy + load balancer.
coredns: the cluster-internal DNS.
A Service is assigned its own (unique) address
-> and can have a unique name.
Service names are predictable!!
(the full form is <service>.<namespace>.svc.cluster.local, matching the resolv.conf search domains seen later)
port: the Service's exposed port
  = the load balancer's port (e.g. 80)
targetPort: the Pod's port
selector: a Service is wired up with a selector
  that selects Pods by their labels
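Put together, a minimal manifest looks like this; it matches the ldh-svc file used later in this session (with its apiVersion typo fixed):

apiVersion: v1
kind: Service
metadata:
  name: ldh-svc
spec:
  ports:
  - port: 80          # the Service (load balancer) port
    targetPort: 8080  # the port the Pods actually listen on
  selector:
    app: ldh-rs       # selects Pods carrying this label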
A ClusterIP is not reachable from outside the cluster.
Endpoints
-> the actual Pods.
Creating a Service creates an Endpoints object with the same name;
it holds the list of Pods the Service's label selector currently matches.
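The object can be inspected directly (the session below does the same thing with describe ep):

kubectl get endpoints ldh-svc
kubectl describe endpoints ldh-svc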
Make a test Pod and test with it.
Pods run detached (-d = run in the background).
Use an image with all the networking tools preinstalled.
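That is the one-off interactive test Pod used throughout this session; --rm deletes it on exit:

kubectl run nettool -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash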
Session affinity
Because there is no authenticated session state to rely on,
sessions are pinned instead ("sticky sessions").
The Service cannot see cookies -- it only sees Layer 4
(it is an L4 load balancer).
Default: no session pinning.
With ClientIP, the same client IP is always routed to the same place,
so the responding Pod stays the same.
(see the ldh-svc-ses-aff.yml manifest below)
Multi-port: one Service can expose several ports; a sketch follows.
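A minimal multi-port sketch; the ldh-svc-multi name and the 443/8443 pair are illustrative assumptions, not something from this session. When a Service exposes more than one port, each port must be named:

apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-multi      # hypothetical name
spec:
  ports:
  - name: http             # names are mandatory with multiple ports
    port: 80
    targetPort: 8080
  - name: https            # assumes the Pods also listen on 8443
    port: 443
    targetPort: 8443
  selector:
    app: ldh-rs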
vagrant@k-control:~$ vi ldh-svc.yml
vagrant@k-control:~$ cat ldh-svc.yml
apiVercion: v1
kind: Service
metadata:
  name: ldh-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs
(note the typo: apiVercion instead of apiVersion -- this is why the create commands below fail validation until the file is fixed)
vagrant@k-control:~$ cat ldh-rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dh-rc
  template:
    metadata:
      labels:
        app: dh-rc
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
(the app: dh-rc labels here do not match the Service's app: ldh-rs selector; the next edits fix that)
vagrant@k-control:~$ vi ldh-rs.yml
vagrant@k-control:~$ vi ldh-rs.yml
vagrant@k-control:~$ cat ldh-rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs
  template:
    metadata:
      labels:
        app: ldh-rs
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ kubectl create -f ldh-svc.yml
error: error validating "ldh-svc.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
vagrant@k-control:~$ vi ldh-svc.yml
vagrant@k-control:~$ vi ldh-svc.yml
vagrant@k-control:~$ kubectl create -f ldh-svc.yml
error: error validating "ldh-svc.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
vagrant@k-control:~$ vi ldh-svc.yml
vagrant@k-control:~$ kubectl create -f ldh-svc.yml
service/ldh-svc created
vagrant@k-control:~$ kubectl create -f ldh-rs.yml
replicaset.apps/ldh-rs created
vagrant@k-control:~$ kubectl get rs,po,svc
NAME                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ldh-rs   3         3         3       2m50s

NAME               READY   STATUS    RESTARTS   AGE
pod/ldh-rs-2dkt8   1/1     Running   0          2m50s
pod/ldh-rs-52kqm   1/1     Running   0          2m50s
pod/ldh-rs-f4sqr   1/1     Running   0          2m50s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   2d20h
service/ldh-svc      ClusterIP   10.101.73.229   <none>        80/TCP    3m5s
vagrant@k-control:~$ kubectl describe svc
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.200.50:6443
Session Affinity:  None
Events:            <none>

Name:              ldh-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=ldh-rs
Type:              ClusterIP
IP:                10.101.73.229
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         192.168.108.18:8080,192.168.169.84:8080,192.168.82.144:8080
Session Affinity:  None
Events:            <none>
vagrant@k-control:~$ kubectl describe svc ldh-svc
Name:              ldh-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=ldh-rs
Type:              ClusterIP
IP:                10.101.73.229
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         192.168.108.18:8080,192.168.169.84:8080,192.168.82.144:8080
Session Affinity:  None
Events:            <none>
vagrant@k-control:~$ kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
ldh-rs-2dkt8   1/1     Running   0          6m    192.168.82.144   k-node1   <none>           <none>
ldh-rs-52kqm   1/1     Running   0          6m    192.168.108.18   k-node2   <none>           <none>
ldh-rs-f4sqr   1/1     Running   0          6m    192.168.169.84   k-node3   <none>           <none>
vagrant@k-control:~$ kubectl scale rs ldh-rs --replicas=5
replicaset.apps/ldh-rs scaled
vagrant@k-control:~$ kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
ldh-rs-2dkt8   1/1     Running   0          6m44s   192.168.82.144   k-node1   <none>           <none>
ldh-rs-52kqm   1/1     Running   0          6m44s   192.168.108.18   k-node2   <none>           <none>
ldh-rs-9m5k4   1/1     Running   0          7s      192.168.82.145   k-node1   <none>           <none>
ldh-rs-f4sqr   1/1     Running   0          6m44s   192.168.169.84   k-node3   <none>           <none>
ldh-rs-vjp5n   1/1     Running   0          7s      192.168.169.85   k-node3   <none>           <none>
vagrant@k-control:~$ kubectl describe svc ldh-svc
Name:              ldh-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=ldh-rs
Type:              ClusterIP
IP:                10.101.73.229
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         192.168.108.18:8080,192.168.169.84:8080,192.168.169.85:8080 + 2 more...
Session Affinity:  None
Events:            <none>
vagrant@k-control:~$ kubectl describe ep ldh-svc
Name:         ldh-svc
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-07-09T04:17:14Z
Subsets:
  Addresses:          192.168.108.18,192.168.169.84,192.168.169.85,192.168.82.144,192.168.82.145
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP

Events:
  Type     Reason                  Age    From                 Message
  ----     ------                  ----   ----                 -------
  Warning  FailedToUpdateEndpoint  7m23s  endpoint-controller  Failed to update endpoint default/ldh-svc: Operation cannot be fulfilled on endpoints "ldh-svc": the object has been modified; please apply your changes to the latest version and try again
vagrant@k-control:~$ kubectl run nettool -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "nettool" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log nettool)
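(The "unable to upgrade connection: pod does not exist" attach failures here usually mean the API server cannot reach the kubelet on the worker node; in Vagrant clusters a common cause is the kubelet advertising the NAT interface IP instead of the host-only one. A possible fix, assuming the host-only addresses live in 192.168.200.0/24 like the control node's 192.168.200.50 above:

# on each node, pin the kubelet to its reachable IP, then restart it
echo 'KUBELET_EXTRA_ARGS=--node-ip=<node-192.168.200.x-IP>' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
)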
(the same kubectl run command was retried several times with the identical attach error each time)
vagrant@k-control:~$ vi ldh-svc.yml
vagrant@k-control:~$ kubectl get services ldh-svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
ldh-svc   ClusterIP   10.101.73.229   <none>        80/TCP    15m
vagrant@k-control:~$ vi ldh-rs.yml
vagrant@k-control:~$ kubectl create -f ldh-rs.yml
Error from server (AlreadyExists): error when creating "ldh-rs.yml": replicasets.apps "ldh-rs" already exists
vagrant@k-control:~$ kubectl get services ldh-svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
ldh-svc   ClusterIP   10.101.73.229   <none>        80/TCP    18m
vagrant@k-control:~$ kubectl run nettool -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "nettool" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log nettool)
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# curl http://192.168.108.18
curl: (7) Failed to connect to 192.168.108.18 port 80: Connection refused
(going straight to the Pod IP on port 80 fails because the Pod listens on 8080; the Service maps 80 -> 8080)
bash-5.1# curl http://10.101.73.229
Hello World!
ldh-rs-9m5k4
bash-5.1# curl http://10.101.73.229
Hello World!
ldh-rs-52kqm
bash-5.1#
bash-5.1# curl http://10.101.73.229
Hello World!
ldh-rs-f4sqr
bash-5.1# curl http://10.101.73.229
Hello World!
ldh-rs-vjp5n
bash-5.1# curl http://10.101.73.229
Hello World!
ldh-rs-vjp5n
bash-5.1# curl http://10.101.73.229
Service discovery
Communicate using names instead of IPs.
When a Pod is created, environment variables for the existing Services are registered automatically.
Registered as environment variables after the Pod has already been made?
No, no way to do that --
and if the Service changes later, the variables are not updated.
--> so DNS-based discovery is used instead~~!
Specify which namespace the Service is in:
if the Service and the Pod are in different places,
the namespace must be given explicitly.
Launch a Pod in a different namespace --
can it reach the Service by bare name? No,
if the namespaces differ it does not work (a sketch follows).
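A sketch of that check; the test namespace and client Pod name are assumptions, and the FQDN forms match the curls later in this session:

kubectl create namespace test
kubectl run client -n test -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
# inside the Pod:
curl http://ldh-svc                             # fails: the short name resolves against the test namespace
curl http://ldh-svc.default                     # works: namespace specified
curl http://ldh-svc.default.svc.cluster.local   # works: full FQDN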
vagrant@k-control:~$ cat ldh-svc-ses-aff.yml
apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-ses-aff
spec:
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs
vagrant@k-control:~$ kubectl create -f ldh-svc-ses-aff.yaml
error: the path "ldh-svc-ses-aff.yaml" does not exist
vagrant@k-control:~$ kubectl create -f ldh-svc-ses-aff.yml
error: error parsing ldh-svc-ses-aff.yml: error converting YAML to JSON: yaml: line 13: could not find expected ':'
vagrant@k-control:~$ vi ldh-svc-ses-aff.yml
vagrant@k-control:~$ vi ldh-svc-ses-aff.yml
vagrant@k-control:~$ kubectl create -f ldh-svc-ses-aff.yml
error: error parsing ldh-svc-ses-aff.yml: error converting YAML to JSON: yaml: line 13: could not find expected ':'
vagrant@k-control:~$ vi ldh-svc-ses-aff.yml
vagrant@k-control:~$ kubectl create -f ldh-svc-ses-aff.yml
error: error parsing ldh-svc-ses-aff.yml: error converting YAML to JSON: yaml: line 13: could not find expected ':'
(the error points at line 13, past the end of the file as displayed above, so the file likely contains stray trailing characters; the error is not resolved in this session)
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "ldh" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log ldh)
vagrant@k-control:~$ kubectl run dh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
LDH_SVC_PORT_80_TCP_ADDR=10.101.73.229
HOSTNAME=dh
PWD=/
LDH_SVC_PORT_80_TCP=tcp://10.101.73.229:80
LDH_SVC_PORT_80_TCP_PORT=80
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
LDH_SVC_PORT=tcp://10.101.73.229:80
TERM=xterm
SHLVL=1
LDH_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
LDH_SVC_SERVICE_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LDH_SVC_SERVICE_HOST=10.101.73.229
_=/usr/bin/env
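Those LDH_SVC_* entries follow the <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT convention (name upper-cased, dashes turned into underscores), so from inside the Pod the Service could also be reached with:

curl http://$LDH_SVC_SERVICE_HOST:$LDH_SVC_SERVICE_PORT/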
bash-5.1# kubectl get all -n kube-system -l k8s-app=kube-dns
bash: kubectl: command not found
bash-5.1# ^C
bash-5.1# exit
exit
Session ended, resume using 'kubectl attach dh -c dh -i -t' command when the pod is running
pod "dh" deleted
vagrant@k-control:~$ kubectl get all -n kube-system -l k8s-app=kube-dns
NAME                          READY   STATUS    RESTARTS   AGE
pod/coredns-f9fd979d6-hlpf4   1/1     Running   2          2d2h
pod/coredns-f9fd979d6-scz22   1/1     Running   2          2d2h

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d21h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           2d21h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-66bff467f8   0         0         0       2d21h
replicaset.apps/coredns-f9fd979d6    2         2         2       2d2h
vagrant@k-control:~$ kubectl run nettool -it --image=ghcr.io/c1t1d0s7/network-multitoos --rm bash
pod "nettool" deleted
error: timed out waiting for the condition
vagrant@k-control:~$ kubectl run dhdh -it --image=ghcr.io/c1t1d0s7/network-multitoos --rm bash
pod "dhdh" deleted
error: timed out waiting for the condition
vagrant@k-control:~$ kubectl run h -it --image=ghcr.io/c1t1d0s7/network-multitoos --rm bash
^C
vagrant@k-control:~$ kubectl run hd -it --image=ghcr.io/c1t1d0s7/network-multitools --rm bash
pod "hd" deleted
error: timed out waiting for the condition
(these time out because the image name is misspelled -- "network-multitoos" / "network-multitools" instead of "network-multitool"; the next run with the correct name works)
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
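(Note on the curls below: the stray space in "curl http:// ldh-svc" makes curl parse two arguments -- the (3) bad-format error is for the bare "http://", and the Hello World response comes from fetching "ldh-svc" as a second URL. The short names work because the resolver appends the search domains above, e.g. ldh-svc -> ldh-svc.default.svc.cluster.local; ldh-svc.default.svc.cluster resolves nowhere because it is neither a complete FQDN nor completable by any search suffix.)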
bash-5.1# curl http:// ldh-svc
curl: (3) URL using bad/illegal format or missing URL
Hello World!
ldh-rs-vjp5n
bash-5.1# curl http:// ldh-svc.default
curl: (3) URL using bad/illegal format or missing URL
Hello World!
ldh-rs-52kqm
bash-5.1# curl http:// ldh-svc.default.svc
curl: (3) URL using bad/illegal format or missing URL
Hello World!
ldh-rs-f4sqr
bash-5.1# curl http:// ldh-svc.default.svc.cluster
curl: (3) URL using bad/illegal format or missing URL
curl: (6) Could not resolve host: ldh-svc.default.svc.cluster
bash-5.1# curl http:// ldh-svc.default.svc.cluster.local
curl: (3) URL using bad/illegal format or missing URL
Hello World!
ldh-rs-52kqm
bash-5.1# exit
exit
Session ended, resume using 'kubectl attach ldh -c ldh -i -t' command when the pod is running
pod "ldh" deleted
vagrant@k-control:~$ kubectl run hd -it --image=ghcr.io/c1t1d0s7/network-multitools --rm bash^C
vagrant@k-control:~$ kubectl get demonsets.apps -l k8s-app=kube-dns -n kube-system
error: the server doesn't have a resource type "demonsets"
vagrant@k-control:~$ kubectl get daemonsets.apps -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.
vagrant@k-control:~$ kubectl get daemonsets.apps -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.
vagrant@k-control:~$
vagrant@k-control:~$ kubectl delete replicasets.apps,service --all
replicaset.apps "ldh-rs" deleted
service "kubernetes" deleted
service "ldh-svc" deleted
vagrant@k-control:~$
https://metallb.universe.tf/installation/
https://metallb.universe.tf/configuration/