Make desktop ubuntu headless
sudo apt-get purge libx11.* libqt.*
k3s has different ways to form a cluster than kubeadm join:
https://docs.k3s.io/quick-start
k3s specifies its startup command in /etc/systemd/system/k3s.service (k3s-agent.service if installed as an agent); usually it is k3s server. To join the master node you need to change it to k3s agent, or pass the additional environment variables K3S_URL=https://<node_ip>:6443 and K3S_TOKEN=<node-token> while running the k3s installation script. The node token is at /var/lib/rancher/k3s/server/node-token on the server node. The agent node still needs registry mirrors configured at /etc/rancher/k3s/registries.yaml to pull images successfully.
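For example, a minimal join sketch (the mirror endpoint below is a placeholder, not a recommendation):

# on the server: read the join token
sudo cat /var/lib/rancher/k3s/server/node-token

# on the agent: install k3s in agent mode and join the server
curl -sfL https://get.k3s.io | K3S_URL=https://<node_ip>:6443 K3S_TOKEN=<node-token> sh -

# on the agent: configure registry mirrors, then restart the agent service
sudo mkdir -p /etc/rancher/k3s
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://<your_mirror_host>"
EOF
sudo systemctl restart k3s-agent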
k3sup can automatically install a k3s cluster over an ssh connection
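A rough sketch of how that looks (IPs and ssh user are placeholders):

# install the first server node over ssh and fetch its kubeconfig locally
k3sup install --ip <server_ip> --user <ssh_user>

# join an agent node to that server
k3sup join --ip <agent_ip> --server-ip <server_ip> --user <ssh_user>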
multi server cluster setup:
do not install it on microk8s
install windows on kubevirt:
https://charlottemach.com/2020/11/03/windows-kubevirt-k3s.html
VirtualMachine is like a Deployment, while VirtualMachineInstance is like a Pod.
export the kubeconfig file path in ~/.bashrc:
export KUBECONFIG=<kubeconfig_filepath>
download and modify the manifests
# prefer ustc and azure mirrors of quay.io
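A sketch of what that amounts to (the version and the mirror host are placeholders; only the image registry in the manifests is rewritten):

VERSION=v1.0.0   # pick one from the releases page
wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# rewrite quay.io to a mirror you trust (placeholder host)
sed -i 's#quay.io#<your_quay_mirror>#g' kubevirt-operator.yaml

kubectl apply -f kubevirt-operator.yaml
kubectl apply -f kubevirt-cr.yaml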
check if it works
kubectl get pods -n kubevirt
install virtctl
# get the download address from: https://github.com/kubevirt/kubevirt/releases/
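For example, on linux amd64 (the version is a placeholder; asset names follow the release naming scheme):

VERSION=v1.0.0
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64
chmod +x virtctl
sudo mv virtctl /usr/local/bin/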
create vm (already with tun device inside)
# init_vm.yaml
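The original manifest is not preserved here; a minimal sketch in the shape of the standard kubevirt demo VM would look something like this (the container disk image and names are placeholders, and the real image has to ship the tun device setup):

cat <<'EOF' > init_vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: init-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            # placeholder demo image; replace with your own
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF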
apply it to run the vm
kubectl apply -f init_vm.yaml
to interact with the vm
virtctl console <vm_name>
remember to set the unbuffered output flag -u when running any python script, otherwise there will be no output in the pm2 log
install and set up pm2 as a daemon:
sudo apt install npm
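The rest of the setup presumably goes along these lines (the usual global install plus boot registration):

sudo npm install -g pm2
pm2 startup   # prints a command that registers pm2 with your init system; run it
pm2 save      # persist the current process list across reboots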
run program as daemon:
pm2 start -n <process_name> <executable_name> -- <process_arguments>
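For the python case mentioned above (unbuffered output), following the same pattern; the interpreter path and script name are placeholders:

pm2 start -n myworker /usr/bin/python3 -- -u myworker.py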
It is not advised to do so with dual WiFi connections, which are a pain in the ass in daily usage (only one of them will be used at a time). Ethernet and WiFi dual connections seem fine with firejail but failed with dante.
Use firejail:
sudo firejail --net=wlan0 --ip=dhcp --noprofile <program cmd>
Use dante and proxychains-ng:
sudo apt install dante-server proxychains-ng
Now edit the dante config file at /etc/danted.conf:
internal: eth0 port = 1080
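Only the internal line survives above; a fuller sketch of /etc/danted.conf, with the external interface and auth method as assumptions matching the rest of this setup, might look like:

cat <<'EOF' | sudo tee /etc/danted.conf
logoutput: syslog
# the interface/port local clients connect to
internal: eth0 port = 1080
# send outgoing traffic through the WiFi interface
external: wlan0
# username auth, matching the user/password line in proxychains.conf
socksmethod: username
user.privileged: root
user.notprivileged: nobody

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
EOF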
Run the daemon by:
danted
Find the [ProxyList] section in /etc/proxychains.conf and add the following line:
socks5 127.0.0.1 1080 root <root_password>
Run the program with proxychains-ng:
proxychains <program cmd>
You can test your configuration like:
curl -x socks5://root:root@127.0.0.1:1080 https://www.baidu.com
If you run danted like systemctl start danted, you can configure a separate user for authentication. You have to change /etc/danted.conf and /etc/proxychains.conf accordingly.
sudo useradd -r -s /bin/false your_dante_user
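Then give that user a password and point proxychains at it instead of root (a sketch):

sudo passwd your_dante_user
# and under [ProxyList] in /etc/proxychains.conf:
#   socks5 127.0.0.1 1080 your_dante_user <password>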
Fcitx contains google-pinyin, which is the most reliable Chinese input method of all time.
You need to configure it after installation as described here.
If all methods fail, consider using tools like systemd or “Session and Startup” to force running the fcitx command after startup.
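A sketch of the systemd route, as a user unit (the fcitx path and the -D foreground flag are assumptions; check fcitx --help):

mkdir -p ~/.config/systemd/user
cat <<'EOF' > ~/.config/systemd/user/fcitx.service
[Unit]
Description=Fcitx input method
After=graphical-session.target

[Service]
# -D is assumed to keep fcitx in the foreground; verify with fcitx --help
ExecStart=/usr/bin/fcitx -D
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now fcitx.service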
training on GPU is intensive and will occasionally burn hardware if you are not careful; either do this on Kaggle, or modify the software to stop training when the GPU gets hot, but we are using the trainer here
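A crude watchdog sketch along those lines (the threshold and the process name are assumptions):

# stop the training process once the GPU gets too hot
while true; do
    temp=$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits | head -n1)
    if [ "$temp" -ge 85 ]; then
        echo "GPU at ${temp}C, stopping training"
        pkill -f train.py   # hypothetical process name
        break
    fi
    sleep 30
done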
for those that don’t provide pretrained models:
im2latex in tensorflow, with makefile support; runs on tensorflow v1 and python3
im2latex in pytorch, more recent. the dataset has relocated to here according to the official website
you may need to adapt our modified code to load the weights and test the result against our image.
it is reported that the performance is poor; it may not be worth trying.
download tensorflow 0.12.0 for macos here
visit here to get all miniconda installers
to install on macos, download the installer here
some tutorial here about libmagic as a bonus tip
CONDA_SUBDIR=osx-64 conda create -n py27 python=2.7 # include other packages here
after that, do this to get pip on python2.7 (rosetta2)
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
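Then run it with the environment’s python (assuming the py27 env created above):

conda activate py27
python get-pip.py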
install a tensorflow version below 1; doing this is far easier on linux. maybe we should do this in a conda virtual environment to prevent conflicts.
it works!
download libcudnn5 for torch
remember to activate the torch environment by exporting the paths via some shell script
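A sketch of what that might look like, assuming a standard torch install under ~/torch and a local cudnn5 directory (paths are placeholders):

source ~/torch/install/bin/torch-activate
export CUDNN_PATH=/path/to/cudnn5/lib64/libcudnn.so.5   # used by the cudnn torch bindings
export LD_LIBRARY_PATH=/path/to/cudnn5/lib64:$LD_LIBRARY_PATH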
the difference between cudaMalloc and cudaMallocAsync, and some copying and pasting of a generalized template for a memory manager function
qt4 uses CRLF so convert all text files using dos2unix
need to hack qt4 files to build qt4
hack luarocks to allow installing from a local spec file and downloading the repo from github via https
hack some lua torch file to be compatible with cuda11
about c++ tweaks:
add a ‘+’ to force type inference
force type conversion by using brackets (a C-style cast)
use some macro (e.g. #if 0 / #endif) to disable blocks of code
install hugo:
https://gohugo.io/getting-started/installing/
install hugo from snap:
snap install hugo --channel=extended
install on debian/ubuntu:
apt-get install hugo
hugo themes:
how to create hugo blog:
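The links above are not preserved; a sketch of the usual quickstart flow (the theme choice is just an example):

hugo new site myblog
cd myblog
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo "theme = 'ananke'" >> hugo.toml   # or config.toml on older hugo versions
hugo new posts/first-post.md
hugo server -D   # serve drafts locally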