in most cases you would create a tun device, then route all traffic to it after installing the necessary software.
```bash
ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
# download tun2socks
# https://github.com/xjasonlyu/tun2socks
```
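once tun2socks is downloaded, it is pointed at the tun device and an upstream socks5 proxy. a minimal sketch, assuming a socks5 proxy is already listening on 127.0.0.1:1080 (the address is a placeholder; check the tun2socks README for the exact flags of your version):

```bash
# forward everything arriving on tun0 to the socks5 proxy
# 127.0.0.1:1080 is a placeholder for your actual proxy endpoint
./tun2socks -device tun0 -proxy socks5://127.0.0.1:1080
```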
you need to keep a route to the dns server via the default gateway, otherwise you will not be able to reach the dns.
the default gateway differs between nodes, so it is recommended to fetch it from `ip route`.
for ubuntu containers you would run `apt install iproute2` first.
```bash
DNS_IP=8.8.8.8
GATEWAY_IP=$(ip route | grep default | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")
DEFAULT_ROUTE=$(ip route | grep default | sed -E 's/( metric.*)?$/ metric 100/g;')
DEFAULT_NET_DEVICE=$(ip route | grep default | grep -oP '(?<=dev )\w+')

# run with root privilege
ip route delete default
ip route add $DEFAULT_ROUTE
ip route add default via 198.18.0.1 dev tun0
ip route add $DNS_IP via $GATEWAY_IP dev $DEFAULT_NET_DEVICE
```
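to sanity-check the result, `ip route get` shows which path a destination would take; the dns ip should still leave via the physical gateway while everything else goes through tun0 (1.1.1.1 is just an arbitrary test address):

```bash
ip route get $DNS_IP    # expect: via $GATEWAY_IP dev $DEFAULT_NET_DEVICE
ip route get 1.1.1.1    # expect: dev tun0
```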
you need to set `dnsPolicy` to `None` in the manifest, otherwise kubernetes adds the cluster dns ip to `/etc/resolv.conf`, slowing down the lookup process and making it very hard to change at runtime.
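a minimal sketch of the relevant part of such a manifest, assuming 8.8.8.8 as the upstream resolver (pod name and image are placeholders):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tun-routed-pod        # placeholder name
spec:
  dnsPolicy: None             # do not inject the cluster dns ip
  dnsConfig:
    nameservers:
      - 8.8.8.8               # must be provided when dnsPolicy is None
  containers:
    - name: main
      image: ubuntu:22.04     # placeholder image
      command: ["sleep", "infinity"]
EOF
```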
not every domain should be reached via public proxies, unless it is guaranteed not to be discovered by the gfw.
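destinations that must stay off the proxy can be pinned to the physical gateway the same way the dns ip was above; for example (203.0.113.10 is a placeholder address):

```bash
# keep one destination on the direct route, bypassing tun0
DIRECT_IP=203.0.113.10
ip route add $DIRECT_IP via $GATEWAY_IP dev $DEFAULT_NET_DEVICE
```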
the full network isolation scheme is shown below:
```text
internet interface
        |
intranet-isolated proxy server (hysteria)
        |
cluster service with clusterIP
        |
isolated socks5 server with only access to the hysteria service clusterIP and port
        |
local socks5 service with clusterIP
        |
isolated tun-routed pod with only access to socks5 service clusterIP and port
```
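one way to approximate the "only access to ... clusterIP and port" constraints is an egress NetworkPolicy. a sketch for the tun-routed pod, assuming the socks5 pods carry the label `app: socks5-proxy` and listen on port 1080 (all names, labels and the port are assumptions; clusterIP traffic is DNATed to pod ips before policy evaluation, so the policy selects the backing pods rather than the service ip, and a real policy would also need to allow dns egress):

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-tun-pod-egress    # placeholder name
spec:
  podSelector:
    matchLabels:
      app: tun-routed-pod          # placeholder label on the isolated pod
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: socks5-proxy    # placeholder label on the socks5 pods
      ports:
        - protocol: TCP
          port: 1080               # placeholder socks5 port
EOF
```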
to create the vm (kubevirt) and check its status:

```bash
kubectl apply -f init_vm.yaml
kubectl get vm
kubectl get vmi
```
to interact with the vm:
```bash
virtctl console <vm_name>

# ssh config is stored at: ~/.ssh/kube_known_hosts
virtctl ssh <user>@<vm_name>

# run this under gui, with `vncviewer` in path
# install with `apt install tigervnc-viewer`
virtctl vnc <vm_name>
```
to list pods programmatically with the kubernetes python client:

```python
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config_path = ...
config.load_kube_config(config_path)

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
```
to load an image exported with `docker save <image>:<tag>`:
```bash
# ref: https://minikube.sigs.k8s.io/docs/commands/image/
# remember to set a tag to the image imported
# or set the imagePullPolicy to Never
# ref: https://iximiuz.com/en/posts/kubernetes-kind-load-docker-image/
minikube image load <image_filepath>/<docker_image_name>

microk8s images import <image_filepath>
microk8s ctr image import <image_filepath>

k3s ctr image import <image_filepath>
```