constrain pod resources:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
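per the page above, resources are constrained per container via requests and limits; a minimal fragment (all values are illustrative assumptions, not recommendations):

```yaml
# container-level resource constraints; values are illustrative
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
    ephemeral-storage: "1Gi"
  limits:
    memory: "128Mi"
    cpu: "500m"
    ephemeral-storage: "2Gi"
```

this goes under a container entry in the pod spec; exceeding the ephemeral-storage limit is what triggers the eviction described below.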
to manually exceed the ephemeral storage limit, run inside the container:

```shell
fallocate -l 10G /bigfile
```
the pod will be evicted and its volume and container purged, but the pod record is not automatically removed (it stays visible, e.g. via kubectl get pods --field-selector=status.phase=Failed).
to clean up the leftover records one may run a scheduled job, e.g. a loop like (a sketch; the interval is arbitrary):

```shell
# periodically delete failed (incl. evicted) pod records
while true; do
  kubectl delete pods --all-namespaces --field-selector=status.phase=Failed
  sleep 3600
done
```
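the scheduled cleanup can also live in-cluster as a CronJob; a minimal sketch, assuming a service account bound to pod list/delete permissions (RBAC omitted) — the name, schedule, and image are assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: evicted-pod-cleanup      # name is an assumption
spec:
  schedule: "0 * * * *"          # hourly; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner   # must have pod list/delete RBAC
          restartPolicy: Never
          containers:
            - name: cleanup
              image: bitnami/kubectl:latest # any image shipping kubectl
              command:
                - /bin/sh
                - -c
                - kubectl delete pods --all-namespaces --field-selector=status.phase=Failed
```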
or set the kube-controller-manager startup argument terminated-pod-gc-threshold=1, so the controller keeps at most one terminated pod record; how to pass the argument is distribution dependent.
for k3s, edit /etc/rancher/k3s/config.yaml like:

```yaml
kube-controller-manager-arg:
  - "terminated-pod-gc-threshold=1"
```

then restart the k3s service (e.g. systemctl restart k3s).
for microk8s, edit /var/snap/microk8s/current/args/kube-controller-manager, add the line --terminated-pod-gc-threshold=1, and restart with microk8s stop && microk8s start.
references:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
https://docs.k3s.io/security/hardening-guide
https://github.com/k3s-io/k3s/issues/10448
make sure you have a NetworkPolicy-capable CNI first. one is usually included, but be careful with minikube, whose default setup does not enforce network policies (a capable CNI can be enabled, e.g. minikube start --cni=calico).
apply these configs with kubectl apply -f <config_path>
open a shell in a pod with kubectl exec -it <pod_name> -- /bin/sh
deployment config (a minimal sketch; the name, image, and ephemeral-storage values are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: busybox:1.36
          command: ["sleep", "infinity"]
          resources:
            requests:
              ephemeral-storage: "1Gi"
            limits:
              ephemeral-storage: "2Gi"
```
network policy config (a sketch, assuming a default-deny-all policy covering every pod in the namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```