2022-12-05
Nctf Writeups

challenges

the platform

officially released source code

buuctf online judge

you may find many writeups for buuctf on blogs and github.

hints and tools

binwalk
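conceptually, binwalk scans a blob for known magic bytes and reports each hit with its offset. a toy python sketch of that idea (the signature table here is a tiny hand-picked subset, not binwalk's actual database):

```python
# minimal illustration of signature scanning, binwalk-style.
# real binwalk has hundreds of signatures plus extraction logic (-e).
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png image",
    b"PK\x03\x04": "zip archive",
    b"\x1f\x8b\x08": "gzip stream",
    b"Rar!\x1a\x07": "rar archive",
}

def scan(blob: bytes):
    """Return sorted (offset, description) pairs for every signature hit."""
    hits = []
    for magic, desc in SIGNATURES.items():
        start = 0
        while (idx := blob.find(magic, start)) != -1:
            hits.append((idx, desc))
            start = idx + 1
    return sorted(hits)
```

in practice you would just run `binwalk -e file` and let it carve the embedded files out for you.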

arr3esty0u github info

shg-sec

hack.lu 2022

ayacms rce in nctf 2022? how do you identify the cms? and how on earth did those teams fingerprint it from that website (bing-upms)?

answer: both were brute-forcing common web directories; the cms can then be inferred from common repo structures.
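a minimal sketch of the candidate-generation half of directory busting (the wordlist below is hypothetical; real tools like gobuster or dirsearch ship far larger lists, seeded from exactly the kind of common repo layouts mentioned above):

```python
from urllib.parse import urljoin

# hypothetical mini-wordlist; paths like install/ or a stray .git/config
# are what give away which cms or repo a site was deployed from.
COMMON_PATHS = ["admin/", "install/", "upload/", ".git/config", "README.md"]

def candidates(base_url: str, wordlist=COMMON_PATHS):
    """Build the list of URLs a directory buster would probe."""
    return [urljoin(base_url, p) for p in wordlist]
```

the actual buster then requests each url and keeps the ones that do not 404.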

baby-aes for crypto signin?

zsteg for solving that png problem?
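zsteg's bread and butter is LSB extraction from png/bmp pixel data. a toy version of the core idea (assumes raw single-channel pixel bytes; packing the collected bits MSB-first per output byte is one of several orders zsteg tries):

```python
def extract_lsb(pixel_bytes: bytes) -> bytes:
    """Collect the least significant bit of each byte, pack 8 bits per output byte."""
    bits = [b & 1 for b in pixel_bytes]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

zsteg automates this across channels, bit orders, and bit planes; `zsteg -a file.png` tries all of its known encodings.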

normal sql injection, not for denodb

huli: interesting blog where denodb 0day came from

some z3 code, which did not solve the problem (angr did):

from z3 import *

# the 36 target bytes, packed little-endian into three constants
data1 = 0x162AEB99F80DD8EF8C82AFADBA2E087A
data2 = 0x47C9F2ACA92F6476BE7F0A6DC89F4305
data3 = 0x33B57575
answer = []
key = [0x7e, 0x1f, 0x19, 0x75]
solver = Solver()
flag = [Int('flag%d' % i) for i in range(36)]
for i in range(16):
    answer.append((data1 >> 8 * i) & 0xff)
for i in range(16):
    answer.append((data2 >> 8 * i) & 0xff)
for i in range(4):
    answer.append((data3 >> 8 * i) & 0xff)
print(answer)
# constrain each 4-byte block of the flag against the extracted bytes
for i in range(9):
    v3 = key[3]
    v4 = flag[4 * i + 3]
    v5 = key[0]
    v6 = flag[4 * i]
    v7 = flag[4 * i + 1]
    v8 = key[1]
    v9 = flag[4 * i + 2]
    v10 = (v6 + v4) * (key[0] + v3)
    v11 = key[2]
    v12 = v3 * (v6 + v7)
    v13 = (v3 + v11) * (v7 - v4)
    v14 = v4 * (v11 - v5)
    v15 = v5 * (v9 + v4)
    solver.add(v14 + v10 + v13 - v12 == answer[4 * i])
    solver.add(v6 * (v8 - v3) + v12 == answer[4 * i + 1])
    solver.add(v15 + v14 == answer[4 * i + 2])
    solver.add(v6 * (v8 - v3) + (v8 + v5) * (v9 - v6) + v10 - v15 == answer[4 * i + 3])
if solver.check() == sat:
    m = solver.model()
    rex = []
    for i in range(34):
        rex.append(m[flag[i]].as_long())
    print(rex)
else:
    print("n0")

writeups

is this claimed to be a complete set for nctf 2022?

arr3esty0u nctf 2022 writeup

nctf 2019 writeup

don’t remember which year it is from, but i have definitely seen it before: katastros’s nctf writeup

ctfiot chamd5 nctf 2022 writeup

nctf 2022 official crypto writeup


2022-08-26
On Building The Lua Torch Library

im2latex-tensorflow sucks, looking for alternatives

training on gpu is intensive and can occasionally damage hardware if you are not careful. either do this on kaggle, or modify the software to stop training when the gpu gets hot; here, though, we are using the trainer.
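one way to implement the "stop when the gpu gets hot" guard is to poll nvidia-smi and parse the reported temperature. a sketch, assuming output in the shape produced by `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader` (one integer per line per gpu); the 80 °C limit is an arbitrary choice of mine:

```python
def too_hot(smi_output: str, limit_c: int = 80) -> bool:
    """Return True if any gpu temperature in the nvidia-smi output reaches limit_c."""
    temps = [int(line.strip()) for line in smi_output.splitlines() if line.strip()]
    return any(t >= limit_c for t in temps)
```

a training loop would call this between epochs (running nvidia-smi via `subprocess`) and checkpoint-and-exit when it returns True.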

harvard nlp showcase

for those that don’t provide pretrained models:

im2latex in tensorflow, with makefile support; runs on tensorflow v1 and python3

im2latex in pytorch, more recent. the dataset has been relocated to here, according to the official website

install python 2.7 to run im2latex-tensorflow

you may need to adapt our modified code to load the weights and test the result against our image.

the performance is reportedly poor; it may not be worth trying.

download tensorflow 0.12.0 for macos here

visit here to get all miniconda installers

to install on macos, download the installer here

some tutorial here about libmagic, as a bonus tip

CONDA_SUBDIR=osx-64 conda create -n py27 python=2.7  # include other packages here
conda activate py27
# ensure that future package installs in this env stick to 'osx-64'
conda config --env --set subdir osx-64

after that, do this to get pip on python2.7 (rosetta2)

curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
python get-pip.py

install a tensorflow version below 1; doing this is far easier on linux. we should probably do it in a conda virtual environment to prevent conflicts.

we are doing this for the original lua implementation of im2markup

it works!

download libcudnn5 for torch

remember to activate the torch environment by sourcing the shell script that exports the paths

the difference between cudaMalloc and cudaMallocAsync; plus some copying and pasting of a generalized memory-manager template function

qt4 source files use CRLF line endings, so convert all text files with dos2unix
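the conversion dos2unix performs is trivial; a python illustration for when the tool is not installed:

```python
def dos2unix(data: bytes) -> bytes:
    """Replace CRLF line endings with LF, like the dos2unix tool does."""
    return data.replace(b"\r\n", b"\n")
```

for a whole tree you would normally just run dos2unix over the files via find/xargs instead.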

need to hack qt4 files to build qt4

hack luarocks to allow installing from a local rockspec file and downloading repos from github via https

hack some lua torch file to be compatible with cuda11

about c++ tweaks:

add a unary ‘+’ to force type inference (it decays a captureless lambda to a plain function pointer)

force a type conversion with a c-style cast (parentheses)

use a macro (e.g. wrapping the block in #if 0 … #endif) to disable some blocks of code
