load an image exported with `docker save <image>:<tag>`
```shell
# ref: https://minikube.sigs.k8s.io/docs/commands/image/
# remember to set a tag to the image imported
# or set the imagePullPolicy to Never
# ref: https://iximiuz.com/en/posts/kubernetes-kind-load-docker-image/
minikube image load <image_filepath>/<docker_image_name>
microk8s images import <image_filepath>
microk8s ctr image import <image_filepath>
k3s ctr image import <image_filepath>
```
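For reference, the full round trip might look like this (the image name `myapp:1.0` and the tarball path are assumptions for illustration; requires a running docker daemon and a minikube cluster):

```shell
# export the image to a tarball (image name is hypothetical)
docker save -o /tmp/myapp.tar myapp:1.0
# load it into the minikube cluster
minikube image load /tmp/myapp.tar
# verify it is visible inside the cluster
minikube image ls
```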
`--storage-opt` is supported only for overlay over xfs with the 'pquota' mount option.
change `data-root` to somewhere else in `/etc/docker/daemon.json`
edit `/etc/fstab` and add our xfs block device on a new line (find its uuid using `blkid`)
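The steps above can be sketched as follows; the partition `/dev/sdb1` and the mount point `/mnt/docker-data` are assumptions, and the `<uuid>` placeholder must be filled in by hand:

```shell
# find the uuid of the xfs partition (device name is hypothetical)
blkid /dev/sdb1
# mount it with the pquota option via /etc/fstab
echo 'UUID=<uuid>  /mnt/docker-data  xfs  defaults,pquota  0 0' | sudo tee -a /etc/fstab
sudo mount -a
# then point docker at it in /etc/docker/daemon.json:
#   { "data-root": "/mnt/docker-data" }
sudo systemctl restart docker
```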
```shell
docker run --storage-opt size=10M --rm -it alpine
```
when using devmapper make sure size is greater than 10G (default)
```shell
docker run --storage-opt size=11G --rm -it alpine
```
the zfs and vfs (not a unionfs, but fine for testing) storage drivers also support disk quotas. you may use them by changing `data-root` to the related storage device.
log in to mysql with an empty password, then execute a command to make it remotely accessible:
```shell
mysql -uroot --password= -e "grant all privileges on *.* to root@'%' identified by '' with grant option; commit;"
```
create a volume and attach it to the container, since container state will be reset after the system restarts.
```shell
docker volume create <volume_name>
docker run -it -d --rm -v <volume_name>:<container_mountpoint> --name <container_name> <image_name>
docker volume inspect <volume_name> # get info on created volume
```
when using mindsdb, it sucks because of bad pypi mirrors.
set pip index url globally:
```shell
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
```
or pass it as environment variable:
```shell
docker run -it -d -e PIP_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple --name <container_name> <image_name>
```
if you want to save container state into an image, use `docker commit <container_name> <image_name>[:<image_tag>]`
Keep in mind that the `docker commit` command only saves the changes made to a container's file system. It does not save changes made to the container's settings or network configuration. Note that `docker export` and `docker import` do not preserve those either: they flatten the container's file system into a single-layer image and drop the image metadata (CMD, ENV, exposed ports) along the way, so they are a portable filesystem snapshot rather than a full container backup.
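A sketch of the export/import round trip (names are placeholders); since `docker import` does not carry over the start command, it can be re-applied with `--change`:

```shell
# flatten the container's filesystem into a tarball
docker export <container_name> -o snapshot.tar
# re-import it as a single-layer image, restoring a start command
docker import --change 'CMD ["/bin/sh"]' snapshot.tar <image_name>:<image_tag>
```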
when publishing ports, if the host ip is not specified you may not be able to reach the service inside the container. do this instead: `docker run -p 0.0.0.0:<host_port>:<container_port> <rest_commands>`
it seems to be caused by the proxy (fastgithub). disable the http proxy so we can connect to the container again, or use clash rules to let "localhost" or subnet requests pass through.
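Alternatively, if the proxy honors the conventional environment variables, exempting localhost and the container subnet is a less drastic option (172.17.0.0/16 is docker's default bridge range and may differ on your machine):

```shell
# bypass the proxy for local and container traffic
export NO_PROXY="localhost,127.0.0.1,172.17.0.0/16"
export no_proxy="$NO_PROXY" # some tools only read the lowercase variant
```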
if you want to change ip routing or other configuration passed at `docker run` time, you need to edit the file `hostconfig.json` located in `/var/lib/docker/containers/<container_id>`, which contains the `PortBindings` section. stop the container first, find and change the config file, then start it again. tutorial
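For example, publishing container port 3306 on host port 3307 would mean a `PortBindings` entry like the following in `hostconfig.json` (the ports here are illustrative); on some setups the docker daemon must also be restarted so the edited file is re-read:

```json
"PortBindings": {
  "3306/tcp": [
    { "HostIp": "0.0.0.0", "HostPort": "3307" }
  ]
}
```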
containers can only contact each other if they share the same network. it is better to give each container a unique ip within the same network. a container name can also be used as a host name instead of a static ip. tutorial
create a network (not overlapping with anything shown in ifconfig, notice the subnet mask):