Autonomous Machines & Society.

2024-07-24
Setting Up Isolated Pod Proxy With Clash And Vpn In Kubernetes

k8s isolated pod proxy setup

you need to disable intranet access with a NetworkPolicy

find more info about intranet ranges here:

https://github.com/langgenius/dify/blob/main/docker/ssrf_proxy/squid.conf.template

http://www.squid-cache.org/Doc/config/acl/

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-intranet-egress
spec:
  podSelector:
    matchLabels:
      app: hello-world
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: ::/0
            except:
              - fc00::/7
              - fe80::/10
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 0.0.0.0/8
              - 10.0.0.0/8
              - 100.64.0.0/10
              - 169.254.0.0/16
              - 172.16.0.0/12
              - 192.168.0.0/16

if you do not want to use tun routing and would rather rely on application-specific proxy adapters, you can export proxy environment variables at the following locations (a minimal example follows the list):

  • /etc/profile or /etc/profile.d/proxy.sh

  • ~/.bashrc
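
a minimal sketch of /etc/profile.d/proxy.sh, assuming a hypothetical proxy listening on 127.0.0.1:7890 (adjust to your own proxy address):

# /etc/profile.d/proxy.sh
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
export all_proxy=socks5://127.0.0.1:7890
export no_proxy=localhost,127.0.0.1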


enable clash tun mode via config.yaml:

interface-name: en0 # conflicts with `tun.auto-detect-interface`
tun:
  enable: true
  stack: system # or gvisor
  # dns-hijack:
  #   - 8.8.8.8:53
  #   - tcp://8.8.8.8:53
  #   - any:53
  #   - tcp://any:53
  auto-route: true # manage `ip route` and `ip rules`
  auto-redir: true # manage nftable REDIRECT
  auto-detect-interface: true # conflicts with `interface-name`

ref:

https://clash.wiki/premium/tun-device.html


in most cases you would create a tun device, then route all traffic to that device after you have installed the necessary software.

ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
# download tun2socks
# https://github.com/xjasonlyu/tun2socks

you need to keep routing the dns server through the default gateway, otherwise you will not be able to reach the dns.

the default gateway differs between nodes; it is recommended to fetch it from ip route

for ubuntu containers you would run apt install iproute2 first

DNS_IP=8.8.8.8
GATEWAY_IP=$(ip route | grep default | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")
DEFAULT_ROUTE=$(ip route | grep default | sed -E 's/( metric.*)?$/ metric 100/g;')
DEFAULT_NET_DEVICE=$(ip route | grep default | grep -oP '(?<=dev )\w+')
# run with root privilege
ip route delete default
ip route add $DEFAULT_ROUTE
ip route add default via 198.18.0.1 dev tun0
ip route add $DNS_IP via $GATEWAY_IP dev $DEFAULT_NET_DEVICE

then launch tun2socks

./tun2socks -interface $DEFAULT_NET_DEVICE -device tun://tun0 -proxy <proxy_protocol>://<proxy_address>
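
once tun2socks is running, a quick sanity check (assuming curl is available in the pod) is to confirm the default route and that requests actually leave through the tun device:

# the default route should now point at tun0
ip route show default
# this request should go out via the proxy
curl -v https://www.example.com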

to disable the tun network you can run:

ip route delete default dev tun0

you need to set dnsPolicy to None in the manifest, otherwise the cluster dns ip would be added to /etc/resolv.conf, slowing down lookups and making it very hard to change at runtime.

not every domain should be reached via public proxies, unless it is guaranteed not to be discovered by the gfw.

the full network isolation scheme is shown below:

internet interface
|
intranet-isolated proxy server (hysteria)
|
cluster service with clusterIP
|
isolated socks5 server with only access
to the hysteria service clusterIP and port
|
local socks5 service with clusterIP
|
isolated tun-routed pod with only access
to socks5 service clusterIP and port
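
as a sketch of the "isolated socks5 server" hop, a NetworkPolicy like the one below restricts egress to the hysteria service clusterIP and port only; the label, ip and port are placeholders, not taken from a real manifest:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-socks5-egress
spec:
  podSelector:
    matchLabels:
      app: socks5-proxy   # placeholder label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: <hysteria_service_cluster_ip>/32
      ports:
        - protocol: TCP
          port: <hysteria_port>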


2024-07-24
Installing And Configuring Kubevirt On Microk8S

install kubevirt

do not install it on microk8s


install windows on kubevirt:

https://charlottemach.com/2020/11/03/windows-kubevirt-k3s.html


VirtualMachine is like Deployment while VirtualMachineInstance is like Pod


export kubeconfig file path to ~/.bashrc

export KUBECONFIG=<kubeconfig_filepath>

download and modify the manifests

# prefer ustc and azure mirrors of quay.io
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo $VERSION
curl -OLk https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
curl -OLk https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases | grep tag_name | grep -v -- '-rc' | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
curl -LkO https://ghproxy.net/github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
curl -LkO https://ghproxy.net/github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
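
after editing the image registries in the downloaded manifests, apply them in the usual way, for example:

kubectl apply -f kubevirt-operator.yaml
kubectl apply -f kubevirt-cr.yaml
kubectl apply -f cdi-operator.yaml
kubectl apply -f cdi-cr.yaml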

check if it works

kubectl get pods -n kubevirt
kubectl get vmi

install virtctl

# get the download address from: https://github.com/kubevirt/kubevirt/releases/
curl -LkO <download_address>
sudo mv <downloaded_binary> /usr/bin/virtctl
sudo chmod +x /usr/bin/virtctl

create vm (already with tun device inside)

# init_vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  runStrategy: Always
  template:
    spec:
      terminationGracePeriodSeconds: 30
      dnsConfig:
        nameservers:
          - 8.8.8.8
      domain:
        resources:
          requests:
            memory: 1024M
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: emptydisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: emptydisk
          emptyDisk:
            capacity: "2Gi"
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }

apply it to run the vm

kubectl apply -f init_vm.yaml
kubectl get vm
kubectl get vmi

to interact with the vm

virtctl console <vm_name>
# ssh config is stored at: ~/.ssh/kube_known_hosts
virtctl ssh <user>@<vm_name>
# run this under gui, with `vncviewer` in path
# install with `apt install tigervnc-viewer`
virtctl vnc <vm_name>


2024-07-21
Mastering Kubernetes Python Library: Installation, Configuration, And Pods Listing Examples

k8s python api library

install the official library with pip install kubernetes


the default api config path for k8s is at ~/.kube/config

k3s is at /etc/rancher/k3s/k3s.yaml

microk8s is at /var/snap/microk8s/current/credentials/kubelet.config

you can also set the KUBECONFIG environment variable to let the kubernetes python library know where the config is
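
for example, calling the loader with no arguments makes it honor KUBECONFIG and fall back to ~/.kube/config:

from kubernetes import config
# picks up the KUBECONFIG environment variable, else ~/.kube/config
config.load_kube_config()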


to use it:

https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md

from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config_path = ...
config.load_kube_config(config_path)

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


2024-07-21
K8S Security


2024-07-21
K8S Load Docker Image

first of all, you can build and push a docker image to a registry.

docker login
docker build -t <username>/<imagename>:<tag> -f <dockerfile> <resource_path>
docker push <username>/<imagename>:<tag>

you can upload to docker.io or the microk8s built-in local registry.

https://microk8s.io/docs/registry-built-in

# for microk8s the registry address is localhost:32000
docker tag <imagename> <registry_addr>/<imagename>
docker push <registry_addr>/<imagename>

you can also build an image with minikube:

minikube image build -t <imagename> -f <dockerfile_path> <resource_path>


load an image exported with docker save -o <image_filepath> <image>:<tag>

# ref: https://minikube.sigs.k8s.io/docs/commands/image/
# remember to set a tag to the image imported
# or set the imagePullPolicy to Never
# ref: https://iximiuz.com/en/posts/kubernetes-kind-load-docker-image/
minikube image load <image_filepath>/<docker_image_name>
microk8s images import <image_filepath>
microk8s ctr image import <image_filepath>
k3s ctr image import <image_filepath>

https://blog.scottlowe.org/2020/01/25/manually-loading-container-images-with-containerd/

https://docs.k3s.io/installation/registry-mirror#pushing-images


you can also configure k8s to use docker as the container runtime instead; see the sketch after the links below.

https://github.com/canonical/microk8s/issues/287

https://docs.k3s.io/advanced#using-docker-as-the-container-runtime
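
for k3s, the linked page documents a --docker flag at install time; a minimal sketch (docker must already be installed on the node):

curl -sfL https://get.k3s.io | sh -s - --docker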


2024-07-20
K8S Deny Intranet Access From All Containers

constrain pod resources:

https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/


to manually exceed the ephemeral storage limit, run:

fallocate -l 10G /bigfile

the pod will be evicted and its volumes and containers will be purged, but the pod record is not automatically removed.

to clean up the mess one may run a scheduled job like:

while true;
do
microk8s kubectl delete pods --field-selector=status.phase=Failed
microk8s kubectl delete pods --field-selector=status.phase=Unknown
sleep 60
done

or configure the implementation-dependent kube-controller-manager startup argument terminated-pod-gc-threshold=1.

for k3s edit /etc/rancher/k3s/config.yaml like:

kube-controller-manager-arg:
- 'terminated-pod-gc-threshold=1'

for microk8s, edit /var/snap/microk8s/current/args/kube-controller-manager
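
a minimal sketch for microk8s, assuming the args file takes one flag per line:

echo '--terminated-pod-gc-threshold=1' | sudo tee -a /var/snap/microk8s/current/args/kube-controller-manager
# restart to pick up the new argument
microk8s stop && microk8s start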

references:

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/

https://docs.k3s.io/security/hardening-guide

https://github.com/k3s-io/k3s/issues/10448


make sure you have a NetworkPolicy-capable cni first. it is usually included, but be careful with minikube, whose default cni does not enforce network policies.
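
for minikube, one option is to start it with a CNI that enforces NetworkPolicy, e.g.:

minikube start --cni=calico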

apply these configs with kubectl apply -f <config_path>

interact with kubectl exec <pod_name> -it -- /bin/sh

deployment config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: alpine-container
          image: alpine:3.7
          command: ["tail", "-f", "/dev/null"]
          resources:
            limits:
              ephemeral-storage: "4Gi"
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 8.8.8.8
      terminationGracePeriodSeconds: 0

network policy config:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-intranet-egress
spec:
  podSelector:
    matchLabels:
      app: hello-world
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: ::/0
            except:
              - fc00::/7
              - fe80::/10
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 0.0.0.0/8
              - 10.0.0.0/8
              - 100.64.0.0/10
              - 169.254.0.0/16
              - 172.16.0.0/12
              - 192.168.0.0/16


2024-07-19
Install Microk8S

network policy:

https://minikube.sigs.k8s.io/docs/handbook/network_policy/

https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy

persistent volume:

https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/


sudo snap install --classic microk8s
sudo microk8s enable dns:<dns_ip>

config files are at /var/snap/microk8s/current, and you need to replace all docker.io references with a docker mirror to prevent init errors.

run microk8s inspect to surface errors such as hostname casing issues and missing files like /var/snap/microk8s/current/var/kubernetes/backend/localnode.yaml

you need to configure multiple registries for docker.io and registry.k8s.io under /var/snap/microk8s/current/args/certs.d

in order to use a mirror site that does not serve the /v2 url path, you have to add override_path = true in the config
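
a sketch of a hosts.toml for docker.io under that certs.d directory; the mirror address and path are placeholders:

# /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://<mirror_addr>/<mirror_path>"]
  capabilities = ["pull", "resolve"]
  # the path above is used as-is instead of having /v2 appended
  override_path = true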

mirror sites:

https://github.com/docker-mirrors/website

https://github.com/cmliu/CF-Workers-docker.io/issues/8

https://github.com/kubesre/docker-registry-mirrors

https://github.com/lawrenceching/gitbook/blob/master/docker-repositories-in-china.md

reference:

https://github.com/containerd/containerd/blob/main/docs/hosts.md

https://microk8s.io/docs/registry-private


install k3s

curl -sfL https://get.k3s.io > k3s_setup.sh
# replace the GITHUB_URL line with a github mirror if needed
bash k3s_setup.sh

k3s mirror

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -

registry config:

https://docs.k3s.io/installation/private-registry


k0s install:

https://docs.k0sproject.io/stable/install/

curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single
sudo k0s start

config:

https://docs.k0sproject.io/stable/runtime/


2024-07-08
Daemonizing Applications With Pm2: A Step-By-Step Guide

PM2 daemonize any app

remember to set the unbuffered output flag -u when running any python script, otherwise there will be no output in the pm2 log
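
a sketch with a hypothetical script name, following the pm2 start pattern used below:

# -u keeps python output unbuffered so it shows up in pm2 logs
pm2 start -n myscript python3 -- -u myscript.py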


install and set up pm2 as a daemon:

sudo apt install npm
sudo npm config set registry https://registry.npmmirror.com
sudo npm install -g pm2
pm2 startup

run program as daemon:

pm2 start -n <process_name> <executable_name> -- <process_arguments>
# save all processes
pm2 save
pm2 monit
pm2 logs
pm2 ls


2024-07-05
Pixi (In Rust), Faster Conda, Mamba Alternative


2024-07-04
Keep Docker Container Running

docker run -d <image_name> tail -f /dev/null
docker run -d <image_name> sleep infinity
docker run -dt <image_name>
docker run -dt <image_name> cat
docker run -d <image_name> nc -l -p <port>
