vagrant@k-control:~$ vi ldh-svc-lb.yml
vagrant@k-control:~$ kubectl create -f ldh-svc-lb.yaml
error: the path "ldh-svc-lb.yaml" does not exist
vagrant@k-control:~$ kubectl create -f ldh-svc-lb.yml
service/ldh-svc-lb created
vagrant@k-control:~$ kubectl get svc,ep
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 108m
service/ldh-svc-lb LoadBalancer 10.99.132.5 <pending> 80:32268/TCP 24s
service/ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 60m
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.200.50:6443 108m
endpoints/ldh-svc-lb 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 24s
endpoints/ldh-svc-np 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 60m
vagrant@k-control:~$ kubectl get svc,ep
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 108m
service/ldh-svc-lb LoadBalancer 10.99.132.5 <pending> 80:32268/TCP 47s
service/ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 60m
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.200.50:6443 108m
endpoints/ldh-svc-lb 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 47s
endpoints/ldh-svc-np 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 60m
vagrant@k-control:~$ cd ~/addon/metallb/
vagrant@k-control:~/addon/metallb$ kubectl create -f configmap.yaml
configmap/config created
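(The contents of configmap.yaml are not shown in this session. Judging by the EXTERNAL-IP handed out just below, it is presumably a MetalLB layer2 address-pool config in the legacy ConfigMap format, along these lines — a sketch, where only 192.168.200.200 is confirmed by the output and the rest of the range is a guess:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.200.200-192.168.200.254
)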
vagrant@k-control:~/addon/metallb$ cd~
Command 'cd~' not found, did you mean:
command 'cdw' from deb cdw (0.8.1-1build4)
command 'cdp' from deb irpas (0.10-7)
command 'cde' from deb cde (0.1+git9-g551e54d-1.1build1)
command 'cdi' from deb cdo (1.9.9~rc1-1)
command 'cd5' from deb cd5 (0.1-4)
command 'cdo' from deb cdo (1.9.9~rc1-1)
command 'cdb' from deb tinycdb (0.78build1)
Try: apt install <deb name>
vagrant@k-control:~/addon/metallb$ cd ~
vagrant@k-control:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 109m
ldh-svc-lb LoadBalancer 10.99.132.5 192.168.200.200 80:32268/TCP 98s
ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 61m
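(Now that MetalLB has assigned 192.168.200.200 from its pool, the Service should answer on port 80 from the host network. A quick check, not captured in this session, would be along the lines of:

curl http://192.168.200.200
)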
vagrant@k-control:~$ cat ldh-svc-lb.yml
apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs
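(For reference: port: 80 is the Service's own port and targetPort: 8080 is the pods' containerPort, while the 32268 in 80:32268/TCP above is a NodePort that Kubernetes auto-allocates for every LoadBalancer Service. The same backends should therefore also answer on any node's address, e.g. — hypothetical, using the control-plane node IP seen in the kubernetes endpoints:

curl http://192.168.200.50:32268
)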
vagrant@k-control:~$ vi ldh-svc-extname.yml
vagrant@k-control:~$ cat ldh-svc-extname.yml
apiVersion: v1
kind: Service
metadata:
  name: gooogle
spec:
  type: ExternalName
  externalName: www.google.com
vagrant@k-control:~$ kubectl create -f ldh-svc-extname.yml
service/gooogle created
vagrant@k-control:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gooogle ExternalName <none> www.google.com <none> 12s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 115m
ldh-svc-lb LoadBalancer 10.99.132.5 192.168.200.200 80:32268/TCP 7m17s
ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 67m
vagrant@k-control:~$ kubectl run dh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "dh" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log dh)
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# host gooogle
gooogle.default.svc.cluster.local is an alias for www.google.com.
www.google.com has address 172.217.27.68
www.google.com has IPv6 address 2404:6800:4004:807::2004
bash-5.1# eixt
bash: eixt: command not found
bash-5.1# ezit
bash: ezit: command not found
bash-5.1# exit
exit
Session ended, resume using 'kubectl attach ldh -c ldh -i -t' command when the pod is running
pod "ldh" deleted
vagrant@k-control:~$ vi ldh-rs-readiness.yml
vagrant@k-control:~$ cat ldh-rs
cat: ldh-rs: No such file or directory
vagrant@k-control:~$ cat ldh-rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs
  template:
    metadata:
      labels:
        app: ldh-rs
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ vi ldh-rs-readiness.yml
vagrant@k-control:~$ cat ldh-rs-readiness.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs-readiness
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs-readiness
  template:
    metadata:
      labels:
        app: ldh-rs-readiness
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        redinessProbe:
          exec:
            command:
            - ls
            - /var/ready
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ kubectl create -f ldh-rs-readiness.yml
error: error validating "ldh-rs-readiness.yml": error validating data: ValidationError(ReplicaSet.spec.template.spec.containers[0]): unknown field "redinessProbe" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
vagrant@k-control:~$ vi ldh-rs-readiness.yml
vagrant@k-control:~$ cat ldh-rs-readiness.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs-readiness
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs-readiness
  template:
    metadata:
      labels:
        app: ldh-rs-readiness
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        readinessProbe:
          exec:
            command:
            - ls
            - /var/ready
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ kubectl create -f ldh-rs-readiness.yml
replicaset.apps/ldh-rs-readiness created
vagrant@k-control:~$ kubectl get rs,po
NAME DESIRED CURRENT READY AGE
replicaset.apps/ldh-rs 3 3 3 65m
replicaset.apps/ldh-rs-readiness 3 3 0 15s
NAME READY STATUS RESTARTS AGE
pod/h 0/1 ImagePullBackOff 0 152m
pod/ldh-rs-5pbt4 1/1 Running 0 65m
pod/ldh-rs-gxjs6 1/1 Running 0 65m
pod/ldh-rs-readiness-9dh6j 0/1 Running 0 15s
pod/ldh-rs-readiness-qqcrl 0/1 Running 0 15s
pod/ldh-rs-readiness-zgqc4 0/1 Running 0 15s
pod/ldh-rs-wvwx5 1/1 Running 0 65m
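(The three ldh-rs-readiness pods stay at 0/1 even though their STATUS is Running: the exec readiness probe runs ls /var/ready inside the container, which exits non-zero as long as the file is missing, so the kubelet never marks them Ready. The failing probe events would be visible with, e.g.:

kubectl describe pod ldh-rs-readiness-9dh6j
)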
vagrant@k-control:~$ cat ldh-svc-extname.yml
apiVersion: v1
kind: Service
metadata:
  name: gooogle
spec:
  type: ExternalName
  externalName: www.google.com
vagrant@k-control:~$ vi ldh-svc-readiness.yml
vagrant@k-control:~$ cat ldh-svc
ldh-svc-extname.yml ldh-svc-np.yml ldh-svc-ses-aff.yml
ldh-svc-lb.yml ldh-svc-readiness.yml ldh-svc.yml
vagrant@k-control:~$ cat ldh-svc-readiness.yml
apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-readiness
spec:
  ports:
  - port:80
    targetPort: 8080
  selector:
    app: ldh-rs-readiness
vagrant@k-control:~$ kubectl get rs,po,svc,ep
NAME DESIRED CURRENT READY AGE
replicaset.apps/ldh-rs 3 3 3 71m
replicaset.apps/ldh-rs-readiness 3 3 0 6m20s
NAME READY STATUS RESTARTS AGE
pod/h 0/1 ImagePullBackOff 0 158m
pod/ldh-rs-5pbt4 1/1 Running 0 71m
pod/ldh-rs-gxjs6 1/1 Running 0 71m
pod/ldh-rs-readiness-9dh6j 0/1 Running 0 6m20s
pod/ldh-rs-readiness-qqcrl 0/1 Running 0 6m20s
pod/ldh-rs-readiness-zgqc4 0/1 Running 0 6m20s
pod/ldh-rs-wvwx5 1/1 Running 0 71m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gooogle ExternalName <none> www.google.com <none> 12m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 127m
service/ldh-svc-lb LoadBalancer 10.99.132.5 192.168.200.200 80:32268/TCP 19m
service/ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 79m
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.200.50:6443 127m
endpoints/ldh-svc-lb 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 19m
endpoints/ldh-svc-np 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 79m
vagrant@k-control:~$ kubectl get rs,po,svc,ep
NAME DESIRED CURRENT READY AGE
replicaset.apps/ldh-rs 3 3 3 73m
replicaset.apps/ldh-rs-readiness 3 3 0 7m35s
NAME READY STATUS RESTARTS AGE
pod/h 0/1 ErrImagePull 0 159m
pod/ldh-rs-5pbt4 1/1 Running 0 73m
pod/ldh-rs-gxjs6 1/1 Running 0 73m
pod/ldh-rs-readiness-9dh6j 0/1 Running 0 7m35s
pod/ldh-rs-readiness-qqcrl 0/1 Running 0 7m35s
pod/ldh-rs-readiness-zgqc4 0/1 Running 0 7m35s
pod/ldh-rs-wvwx5 1/1 Running 0 73m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gooogle ExternalName <none> www.google.com <none> 13m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 128m
service/ldh-svc-lb LoadBalancer 10.99.132.5 192.168.200.200 80:32268/TCP 20m
service/ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 80m
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.200.50:6443 128m
endpoints/ldh-svc-lb 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 20m
endpoints/ldh-svc-np 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 80m
vagrant@k-control:~$ kubectl create -f ldh-svc-readiness.yml
error: error parsing ldh-svc-readiness.yml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
vagrant@k-control:~$ vi ldh-svc-readiness.yml
vagrant@k-control:~$ kubectl create -f ldh-svc-readiness.yml
service/ldh-svc-readiness created
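(The parse error pointed at line 8 but was caused by port:80 one line earlier: without a space after the colon, YAML reads the list item as the plain string "port:80", so the indented targetPort mapping that follows becomes illegal. The corrected file, presumably:

apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-readiness
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs-readiness
)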
vagrant@k-control:~$ kubectl get rs,po,svc,ep
NAME DESIRED CURRENT READY AGE
replicaset.apps/ldh-rs 3 3 3 74m
replicaset.apps/ldh-rs-readiness 3 3 0 8m36s
NAME READY STATUS RESTARTS AGE
pod/h 0/1 ImagePullBackOff 0 160m
pod/ldh-rs-5pbt4 1/1 Running 0 74m
pod/ldh-rs-gxjs6 1/1 Running 0 74m
pod/ldh-rs-readiness-9dh6j 0/1 Running 0 8m36s
pod/ldh-rs-readiness-qqcrl 0/1 Running 0 8m36s
pod/ldh-rs-readiness-zgqc4 0/1 Running 0 8m36s
pod/ldh-rs-wvwx5 1/1 Running 0 74m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gooogle ExternalName <none> www.google.com <none> 14m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 129m
service/ldh-svc-lb LoadBalancer 10.99.132.5 192.168.200.200 80:32268/TCP 21m
service/ldh-svc-np NodePort 10.99.90.246 <none> 80:31521/TCP 81m
service/ldh-svc-readiness ClusterIP 10.98.194.197 <none> 80/TCP 5s
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.200.50:6443 129m
endpoints/ldh-svc-lb 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 21m
endpoints/ldh-svc-np 192.168.108.22:8080,192.168.169.96:8080,192.168.82.154:8080 81m
endpoints/ldh-svc-readiness 5s
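(Note the empty ENDPOINTS column for ldh-svc-readiness: a Service only publishes a pod's address once that pod reports Ready, and none of the readiness pods do yet. The moment one becomes Ready its IP would appear, which can be observed with:

kubectl get endpoints ldh-svc-readiness --watch
)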
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "ldh" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log ldh)
vagrant@k-control:~$ kubectl run ld -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# kubectl exec ldh-rs-readiness-zgqc4 -- touch /var/ready
bash: kubectl: command not found
bash-5.1# exit
exit
Session ended, resume using 'kubectl attach ld -c ld -i -t' command when the pod is running
pod "ld" deleted
vagrant@k-control:~$ kubectl exec ldh-rs-readiness-zgqc4 -- touch /var/ready
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "touch": executable file not found in $PATH: unknown
vagrant@k-control:~$ kubectl exec ldh-rs-readiness-zgqc4 -- sh
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
vagrant@k-control:~$ kubectl exec -it ldh-rs-readiness-zgqc4 -- touch /var/ready
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "touch": executable file not found in $PATH: unknown
command terminated with exit code 126
vagrant@k-control:~$ kubectl exec -it ldh-rs-readiness-zgqc4 -- sh
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
vagrant@k-control:~$ kubectl run ld -it --image=ghcr.io/c1t1d0s7/net
vagrant@k-control:~$ kubectl exec ldh-rs-readiness-zgqc4 -- touch /var/ready
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "touch": executable file not found in $PATH: unknown
command terminated with exit code 126
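(All the exec attempts fail with exit code 126 because ghcr.io/c1t1d0s7/go-myweb is apparently a minimal, likely scratch-based, image containing little beyond the Go binary: there is no sh or touch, so /var/ready cannot be created from outside — and if the image lacks ls as well, the probe as written can never succeed at all. A probe that needs no extra binaries would sidestep this; a sketch, assuming the app answers HTTP on its 8080 containerPort:

readinessProbe:
  httpGet:
    path: /
    port: 8080
)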
vagrant@k-control:~$ vi ldh-svc-headless.yml
vagrant@k-control:~$ cat ldh-svc-headless.yml
apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-headless
spec:
  type: None
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs-headless
vagrant@k-control:~$ cat ldh-rs
.bash_history ldh-cj.yml ldh-rs-exp2.yml
.bash_logout ldh-ds.yml ldh-rs-readiness.yml
.bashrc ldh-job-atvdl.yml ldh-rs.yml
.cache/ ldh-job-comp.yml ldh-svc-extname.yml
.kube/ ldh-job-para.yml ldh-svc-headless.yml
.profile ldh-job.yml ldh-svc-lb.yml
.ssh/ ldh-pod-label.yaml ldh-svc-np.yml
.viminfo ldh-pod.yaml ldh-svc-readiness.yml
.vimrc ldh-rc.yml ldh-svc-ses-aff.yml
addon/ ldh-rs-exp.yml ldh-svc.yml
vagrant@k-control:~$ cat ldh-rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs
  template:
    metadata:
      labels:
        app: ldh-rs
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ vi ldh-rs-headless.yml
vagrant@k-control:~$ cat ldh-rs-headless.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: ldh-rs-headless
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldh-rs-headless
  template:
    metadata:
      labels:
        app: ldh-rs-headless
    spec:
      containers:
      - image: ghcr.io/c1t1d0s7/go-myweb
        name: ldh
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@k-control:~$ kubectl create -f ldh-rs-headless.yml
replicaset.apps/ldh-rs-headless created
vagrant@k-control:~$ kubectl create -f ldh-svc-headless.yml
The Service "ldh-svc-headless" is invalid: spec.type: Unsupported value: "None": supported values: "ClusterIP", "ExternalName", "LoadBalancer", "NodePort"
vagrant@k-control:~$ cat ldh-svc-headless.yml
apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-headless
spec:
  type: None
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs-headless
vagrant@k-control:~$ vi ldh-svc-headless.yml
vagrant@k-control:~$ kubectl create -f ldh-svc-headless.yml
service/ldh-svc-headless created
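(The rejected value was type: None — "headless" is not a Service type. The usual way is to keep the default ClusterIP type and set clusterIP: None, so the edited manifest presumably became:

apiVersion: v1
kind: Service
metadata:
  name: ldh-svc-headless
spec:
  clusterIP: None
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ldh-rs-headless
)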
vagrant@k-control:~$ kubectl get service ldh-svc-headless
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ldh-svc-headless ClusterIP None <none> 80/TCP 22s
vagrant@k-control:~$ kubectl grt endpoints ldh-svc-headless
Error: unknown command "grt" for "kubectl"
Did you mean this?
set
get
Run 'kubectl --help' for usage.
vagrant@k-control:~$ kubectl get endpoints ldh-svc-headless
NAME ENDPOINTS AGE
ldh-svc-headless 192.168.108.24:8080,192.168.169.100:8080,192.168.82.160:8080 54s
vagrant@k-control:~$ kubectl run ldh -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "ldh" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log ldh)
vagrant@k-control:~$ kubectl run ee -it --image=ghcr.io/c1t1d0s7/network-multitool --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# host ldh-svc-headless
ldh-svc-headless.default.svc.cluster.local has address 192.168.169.100
ldh-svc-headless.default.svc.cluster.local has address 192.168.82.160
ldh-svc-headless.default.svc.cluster.local has address 192.168.108.24
bash-5.1# exit
exit
Session ended, resume using 'kubectl attach ee -c ee -i -t' command when the pod is running
pod "ee" deleted
vagrant@k-control:~$ kubectl delte -f ldh-svc-headless.yml
Error: unknown command "delte" for "kubectl"
Did you mean this?
delete
Run 'kubectl --help' for usage.
vagrant@k-control:~$ kubectl delete -f ldh-svc-headless.yml
service "ldh-svc-headless" deleted
vagrant@k-control:~$ kubectl delete -f ldh-rs-headless.yml
replicaset.apps "ldh-rs-headless" deleted
vagrant@k-control:~$