Blog of James Brown
2022-12-13
xmlrpc and dubbo are both remote procedure call (RPC) mechanisms.
you can ship data across the internet, but remember it can’t be some “native” structure like a numpy array.
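a minimal sketch of that round trip with the stdlib xmlrpc modules (the port and function name here are arbitrary examples); only plain xml-rpc types (ints, floats, strings, lists, dicts, dates, binary) survive the trip, so convert a numpy array with tolist() first:
# minimal xml-rpc round trip; port 8000 and the function name are arbitrary examples
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def total(numbers):
    # numbers arrives as a plain python list, never as a numpy array
    return sum(numbers)

server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(total, "total")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://127.0.0.1:8000")
print(client.total([1, 2, 3]))  # 6; pass arr.tolist() if you start from numpy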
the python xmlrpc tutorial and the xmlrpc.server docs mention the c2 wiki, which you have scraped before. where is it, along with everything else you have scraped? probably in that AGI directory, since you were such an archivist at the time.
tutorials on how to detect whether dubbo services are working normally:
basically they either call native java methods or use the telnet protocol from python (see the sketch after the links)
http://www.shouhuola.com/q-23671.html
https://www.yisu.com/zixun/576879.html
https://www.cnblogs.com/leozhanggg/p/14176752.html
https://www.bilibili.com/read/cv13670275/
http://www.zztongyun.com/article/article-1-1.html (open with elinks to prevent ads)
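the telnet-style check boils down to something like this sketch, assuming the provider still exposes the classic dubbo telnet console on its service port (host and port below are placeholders); it sends "status -l" and prints whatever comes back:
# probe a dubbo provider over its telnet console; host and port are placeholders
import socket

def dubbo_status(host="127.0.0.1", port=20880, timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"status -l\r\n")  # "ls" would list exposed services instead
        s.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # stop reading once the console goes quiet
    return b"".join(chunks).decode("utf-8", "replace")

print(dubbo_status())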
2022-12-13
he recently interacts with racketeers on wechat; find out how to add new friends (and groups, if any) on wechat.
the bilibili user and his repo
video style transfer based on DCT-Net, for video spinning / pseudo-original content (视频洗稿 / 伪原创)
AntiFraudChatBot is a wechaty bot for chatting with racketeers, built on Yuan 1.0, a super-large megatron-based model that is freely available for only three months (30k api calls); another application: AI scripted murder-mystery games (AI剧本杀)
megatron-deepspeed enables training large models on cheap hardware
essaykillerbrain is another project he is involved in; it contains EssayKiller_V2, EssayKiller_V1, EssayTopicPredict and WrittenBrainBase
2022-12-13
attach to virtual phone number providers, login bilibili or other websites with that phone number, register cookies, then monetize that account like posting ads
you can also use bugmenot
2022-12-12
what to do when chatgpt is not for everyone?
general introduction
this is about information gathering, so you might learn how to scrape AI models, AI notebooks, tutorials, code snippets from websites/search engines/social media as well.
given only the name of a hack tool, you may not be able to tell what it is (written in python? hosted on github? an online tool?), so you want to use a search engine to find possible entries. you may take snapshots of these pages and index them.
if the hack tool links to some website/manual, you can index that website. if you find it inside some package index or package manager, you will know how to install the package.
you may still miss the wiki, forum and tutorials. you know where to get them.
here are a few sources you can learn things from:
darknet.org.uk, where you learn hacking and hack tools
you also have brew
sdkman
macports
pkgsrc
chocolatey
scoop
winget
snap
portage
conda
flatpak
rpm
urpmi
yum
cargo
dnf
indexes and more to scrape. maybe it is time to improve your searching skills? (select a few web domains you want to learn things from, then perform the query; you still have to deal with keyword generation and site selection)
cyborg hawk linux and backtrack linux may join the parade.
for language-specific package indexes, we have hackage
CPAN
CRAN
crates.io
and more (where are package indexes for C
C++
Pascal
BASIC
assembly
lisp
prolog
lua
and more? visit awesomeopensource, then use its combined topics to find package managers for c and more. you also have libraries.io to monitor libraries across all package managers). just check the tuna mirror site to get a view of that. you may want a network directory-traversal tool akin to find in a local filesystem, one that downloads nothing “binary” but just logs all candidate urls (with file sizes) for you to inspect.
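a minimal sketch of such a tool, assuming the mirror serves plain autoindex-style html listings; it walks directory links breadth-first, never downloads file bodies, and logs each file url with the size reported by a HEAD request (the start url is a placeholder):
# walk an autoindex-style mirror, logging file urls and sizes without downloading bodies
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and not href.startswith(("?", "#", "../")):
                self.links.append(href)

def crawl(start_url, max_pages=100):
    queue, seen = deque([start_url]), {start_url}
    while queue and max_pages > 0:
        url = queue.popleft()
        max_pages -= 1
        parser = LinkParser()
        parser.feed(requests.get(url, timeout=30).text)
        for href in parser.links:
            full = urljoin(url, href)
            if full.endswith("/"):  # sub-directory: queue it
                if full not in seen:
                    seen.add(full)
                    queue.append(full)
            else:  # file: log url plus the size the server reports
                size = requests.head(full, timeout=30).headers.get("Content-Length", "?")
                print(size, full)

crawl("https://mirrors.example.org/some-index/")  # placeholder start url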
after all this information collecting, you must categorize it (topic modelling) and retrieve it when needed (semantic search? recommendation? dialogue-based GPT?). you may find many things that are not obviously hack tools but fit specific needs well.
with all packages scraped, you need to deduplicate them a little, either by name or by homepage.
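a tiny sketch of that deduplication, assuming each scraped record is a dict with name/homepage fields (my assumption about the schema); the first record wins and later duplicates are dropped:
# drop duplicate package records by normalized name or homepage
def dedupe(packages):
    seen_names, seen_homes, unique = set(), set(), []
    for pkg in packages:
        name = pkg.get("name", "").strip().lower().replace("_", "-")
        home = (pkg.get("homepage") or "").rstrip("/").lower()
        if name in seen_names or (home and home in seen_homes):
            continue
        seen_names.add(name)
        if home:
            seen_homes.add(home)
        unique.append(pkg)
    return unique

print(dedupe([{"name": "Requests", "homepage": "https://requests.readthedocs.io/"},
              {"name": "requests", "homepage": ""}]))  # second record is dropped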
case specific
github
a repo named with “awesome” is a collection of handpicked resources
you will find github links on web, social media, instant messaging and forums
Scraper scrapes popular github repositories every day
gitsuggest recommends github repos
How to Use the GitHub API to List Repositories
PyGithub uses python to automate the github api v3
if you want all README pages on github, you first need to collect all github repo urls. you may also collect info on github repos (OSINT). you can retrieve all repos belonging to a given user with the github api (quota limited). you can search github itself or a search engine with some juicy/promising keywords, then collect repo names, usernames and keywords, and repeat the search.
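a minimal sketch of the per-user collection step against the public rest api (unauthenticated calls are rate limited, pass a token to raise the quota); the username is just an example:
# list all public repos of one user via the github rest api, page by page
import requests

def user_repos(user, token=None):
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    page, repos = 1, []
    while True:
        r = requests.get(f"https://api.github.com/users/{user}/repos",
                         params={"per_page": 100, "page": page},
                         headers=headers, timeout=30)
        r.raise_for_status()
        batch = r.json()
        if not batch:
            break
        repos.extend({"name": x["full_name"], "url": x["html_url"]} for x in batch)
        page += 1
    return repos

print(user_repos("octocat"))  # example username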
there are a few github repo archives available for download. the github archive program packed many repos into the arctic code vault; the list is called Greatest Hits.
gharchive provides many websites for monitoring github repos, though it has stopped archiving since 2016.
kali, parrot
kali tool list pages
curl https://en.kali.tools/all/ > kali_tools_all.html  # more tags, more categories, the same as blackarch?
kali meta page on package index
notice that kali’s official “offsec” provides training courses and materials as apt packages. the list:
offsec-awae/kali-rolling 2021.1.2 amd64
these two OSes are for pentesting and use apt as their package manager, but parrot does not provide tool introductions.
get all package names:
apt list
you can retrieve package information with the apt command, like:
apt show <package_name>
you will get the homepage link and the package description; if you want package dependencies, you will have those too.
using apt, one can retrieve package info with a simple command. find the main metapackages first, like parrot-tools-full (parrot) and kali-linux-everything (kali), then retrieve their dependency trees.
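a rough sketch of that workflow on a debian-based box, shelling out to apt-cache and apt; the metapackage name is just the kali one mentioned above and the parsing is deliberately naive:
# walk a metapackage's direct dependencies with apt-cache, then pull metadata with apt show
import subprocess

def direct_deps(metapackage):
    out = subprocess.run(["apt-cache", "depends", metapackage],
                         capture_output=True, text=True, check=True).stdout
    return [line.split(":", 1)[1].strip()
            for line in out.splitlines() if line.strip().startswith("Depends:")]

def pkg_info(name):
    out = subprocess.run(["apt", "show", name], capture_output=True, text=True).stdout
    info = {}
    for line in out.splitlines():
        if line.startswith(("Homepage:", "Description:")):
            key, _, value = line.partition(":")
            info[key.lower()] = value.strip()
    return info

for dep in direct_deps("kali-linux-everything"):
    print(dep, pkg_info(dep))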
parrotos has an index.db you can retrieve info from, or use “Packages” (the general debian package index), or anything else you think is metadata.
chocolatey
note: deprecated since v2.0, can only be used to list local packages
choco list
blackarch
blackarch is based on archlinux, which has both an official repo and a user-provided package repo (AUR). the syntax for retrieving all available package info is almost the same for pacman and yaourt.
maybe you want to retrieve package information with pacman.
list all package information just like apt: description, dependencies, homepage and more.
pacman -Si
use some parser?
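a small sketch of such a parser: it feeds the full pacman -Si dump through a field splitter (field names follow pacman's english output, hence LANG=C):
# parse the full pacman -Si dump into one dict per package
import os
import subprocess

def parse_pacman_si():
    out = subprocess.run(["pacman", "-Si"], capture_output=True, text=True,
                         env={**os.environ, "LANG": "C"}, check=True).stdout
    packages, current = [], {}
    for line in out.splitlines():
        if not line.strip():  # blank line separates packages
            if current:
                packages.append(current)
                current = {}
        elif line.startswith(" ") and current:  # wrapped continuation of the last field
            last_key = list(current)[-1]
            current[last_key] += " " + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        packages.append(current)
    return packages

pkgs = parse_pacman_si()
print(len(pkgs), pkgs[0].get("Name"), pkgs[0].get("URL"))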
for aur repos, use yay or yaourt.
yaourt -Si
you may use dependencies to deduce relationships between packages, and use descriptions, man pages, wikis, manuals and tutorials to understand how packages are used.
download main blackarch tool list:
curl https://www.blackarch.org/tools.html > tools.html
alpine
alpine linux can install man pages (the -doc subpackages) on their own, without installing the full packages
apk list -I | sed -rn '/-doc/! s/([a-z-]+[a-z]).*/\1/p' | awk '{ print system("apk info \""$1"-doc\" > /dev/null") == 0 ? $1 "-doc" : "" }' | xargs apk add
pypi/pip
i remember you have scraped the tsinghua pypi index, which contains many python tools.
retrieve python package info as json:
https://pypi.org/pypi/<package-name>/json
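a minimal sketch against that json endpoint; the package name is just an example:
# fetch one package's metadata from the pypi json api
import requests

def pypi_info(name):
    r = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=30)
    r.raise_for_status()
    info = r.json()["info"]
    return {"name": info["name"], "summary": info["summary"],
            "homepage": info["home_page"], "docs": info.get("docs_url")}

print(pypi_info("requests"))  # example package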
visit the pypi simple index to get all package names; the detailed info, though, is clearly on the other page, which you retrieve from pypi. use the commandline tool below?
pypi [information|description] <package_name>
the documentation url is provided separately from the main page.
commandline tool for searching in pypi
install it, then run:
pypi search <query>
it also provides “read-the-docs” to search within the documentation of a package, plus detailed info
nuget
we can search with the cli tool (not dotnet nuget, which is installed with the dotnet sdk, but nuget; see the installation guide) and with the web interface.
list all packages:
nuget list
get package information:
nuget list <packageName> -Verbosity detailed
query all package information without nuget
the web interface seems to allow some traversal over the page parameter: https://www.nuget.org/packages?page=<pagenum>&sortBy=relevance
keep in mind the pagenum cannot be too big (like 2000).
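a rough sketch of that traversal, assuming the listing pages keep linking each package as /packages/<id>/ (an assumption about the markup); it walks the page parameter and collects ids:
# walk the nuget.org listing pages and collect package ids from /packages/<id>/ links
import re
import requests

def nuget_page_ids(page):
    html = requests.get("https://www.nuget.org/packages",
                        params={"page": page, "sortBy": "relevance"}, timeout=30).text
    # assumes package links still look like href="/packages/Some.Package/"
    return sorted(set(re.findall(r'href="/packages/([^/"]+)/?"', html)))

collected = []
for page in range(1, 6):  # keep the page number small, as noted above
    ids = nuget_page_ids(page)
    if not ids:
        break
    collected.extend(ids)
print(len(collected), collected[:5])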
maven
there are tools for interacting with maven search api.
you can retrieve “pom.xml” to get package info like homepage and description.
maven central has an archetype-catalog for retrieving all available artifact names
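a small sketch of pulling one pom.xml straight off maven central and reading its name/description/url; the coordinates (junit:junit:4.13.2) are just an example and the usual groupId-as-path repo layout is assumed:
# fetch a pom.xml from maven central and extract basic metadata
import requests
import xml.etree.ElementTree as ET

def pom_info(group, artifact, version):
    path = f"{group.replace('.', '/')}/{artifact}/{version}/{artifact}-{version}.pom"
    r = requests.get(f"https://repo1.maven.org/maven2/{path}", timeout=30)
    r.raise_for_status()
    root = ET.fromstring(r.content)
    # {*} ignores the pom xml namespace (python 3.8+)
    return {tag: root.findtext(f"{{*}}{tag}") for tag in ("name", "description", "url")}

print(pom_info("junit", "junit", "4.13.2"))  # example coordinates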
maven search tools:
homebrew
reading the source code and the brew api docs, i found that this url retrieves all formula info from the brew index, and this one the casks.
also run this command to show locally cached package info:
brew info --json --all
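assuming the urls in question are the public formulae.brew.sh json api endpoints (an assumption on my part), a fetch looks like this:
# pull the full formula and cask indexes from the (assumed) brew json api endpoints
import requests

formulae = requests.get("https://formulae.brew.sh/api/formula.json", timeout=60).json()
casks = requests.get("https://formulae.brew.sh/api/cask.json", timeout=60).json()

print(len(formulae), len(casks))
print(formulae[0]["name"], formulae[0]["desc"], formulae[0]["homepage"])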
npm
there’s a repo on github storing up-to-date package names. after that, use npm-description to download the description of every package.
all-the-package-repos contains repo information (github, gitlab) of every npm package
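if you would rather hit the registry yourself instead of using npm-description, a minimal sketch against the public registry.npmjs.org endpoint (the package name is an example):
# fetch one npm package's metadata from the public registry
import requests

def npm_info(name):
    r = requests.get(f"https://registry.npmjs.org/{name}", timeout=30)
    r.raise_for_status()
    doc = r.json()
    return {"name": doc["name"], "description": doc.get("description"),
            "homepage": doc.get("homepage"),
            "repository": (doc.get("repository") or {}).get("url")}

print(npm_info("express"))  # example package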
gem
change gem sources first:
gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
gem list -r really works. you just have to wait.
gem info -r is supposed to list all remote gem info, but it is too slow and does not really work; query only one package at a time.
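per-gem lookups through the rubygems.org json api are much faster; a minimal sketch (the gem name is an example, and the china mirror above does not necessarily expose this api):
# fetch one gem's metadata from the rubygems.org json api
import requests

def gem_info(name):
    r = requests.get(f"https://rubygems.org/api/v1/gems/{name}.json", timeout=30)
    r.raise_for_status()
    doc = r.json()
    return {"name": doc["name"], "info": doc["info"], "homepage": doc.get("homepage_uri")}

print(gem_info("rails"))  # example gem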
manpages
you can download man pages before installing the package
use “dman” from bikeshed (not available on kali, maybe on ubuntu?)
apt-get install bikeshed
or browse man pages on the web, along with tutorials on linux and languages
man pages with different sections (categories)
hierarchical manpages of ubuntu
location of locally installed man pages: /usr/share/man
vscode plugin
web interface for searching
2022-12-11
you talk about planting virus in software when people need it, also sharing some encrypted porn/copyrighted material to users, or mining crypto through github ci, exploits. when doing so, make sure the money is laundried, either through transfer between multiple crypto accounts or some professional agencies.
for github jobs, you may use terraform since it has some github automation tools
2022-12-11
there is some code in his repo for scraping proxies; run it on kaggle or github ci providers.
on github, search for “clash”, “ssr”, “proxy”, “shadowsocks”, “v2ray”, “fanqiang” or other keywords obvious to you, then sort by “recently updated”, since working proxy pools are constantly updated.
or you could use lantern.
sources
proxyscan.io
referred by Proxy-List
telegram bots
self-hosted or cloud/CI based
proxylist by fate0 (no contributions for a long time) has stopped working because of broken github action settings, though getproxy still works (it collects unchecked proxies via travis-ci). i think clash can handle the checking automatically though.
tools
controllers
clash-ctl controls clash from the commandline
clashctl clash controller in rust, with tui and commandline interface
link converters
subscription converter (机场订阅转换器): online conversion between V2ray, Clash, SSR, SS and other subscription links
subconverter, a self-hosted utility to convert between various subscription formats
scrapers
routing rules
providers
subscription links
proxy-list 20000+
ShadowsocksAggregator: use its Eternity.yml in clash. it also has many sources available to check in its README.md
Proxy, updated hourly
free-servers v2ray subscription
clients
2022-12-11
use slim toolkit to shrink docker image size
with iptables, you can constrain a docker container’s network
sudo iptables -I DOCKER-USER -d <ip_range> -j DROP
it does not work if you block all local ip ranges.
to use host-provided proxy servers, one can set environment variables when running containers.
docker run -e http_proxy=<proxy_addr> -e https_proxy=<proxy_addr> -e all_proxy=<proxy_addr> -e no_proxy=<bypass_addrs>
or better, use tun2proxy (linux only)
run server:
docker run -d -v /dev/net/tun:/dev/net/tun --sysctl net.ipv6.conf.default.disable_ipv6=0 --cap-add NET_ADMIN --name tun2proxy tun2proxy --proxy <proto>://[username[:password]@]host:port
container forced to use proxy:
docker run -it --network "container:tun2proxy" <image_name>[:tag]
with docker for mac, you can use the following domain names to get the host and gateway ip:
host.docker.internal
gateway.docker.internal
for podman:
host.containers.internal
gateway.containers.internal
latest docker mirror:
https://zhuanlan.zhihu.com/p/704011584
log in to mysql with an empty password, then execute a command to make it remotely accessible:
mysql -uroot --password= -e "grant all privileges on *.* to root@'%' identified by '' with grant option; commit;"
create a volume and attach it to the container, since containers will be reset after the system restarts.
docker volume create <volume_name>
when using mindsdb, it sucks because of bad pypi mirrors.
set pip index url globally:
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
or pass it as environment variable:
docker run -it -d -e PIP_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple --name <container_name> <image_name>
if you want to save container states into images, use docker commit <container_name> <image_name>[:image_tag]
Keep in mind that the docker commit command only saves the changes made to a container’s file system. It does not save any changes made to the container’s settings or network configurations. To save all changes made to a container, including settings and network configurations, you can use the docker export and docker import commands instead.
when publishing ports, if you do not specify the host ip, you may not be able to reach the service inside the container. do this instead: docker run -p 0.0.0.0:<host_port>:<container_port> <rest_commands>
it turned out to be the proxy (fastgithub). disable the http proxy so we can connect to the container again, or use clash rules to let “localhost” or subnet requests pass through.
if you want to change port bindings or other configuration passed at docker run time, you need to edit the file hostconfig.json located in /var/lib/docker/containers/<container_id>, specifically its PortBindings section. stop the container first, find and change the config file, then start it again. the tutorial seems not to work. fuck.
1 | "PortBindings": { |
containers can only contact each other if they share the same network. it is better to give each container within the same network a unique ip; a container can also be reached by its name as a host name instead of a static ip. tutorial
create a network (not overlapping with anything shown in ifconfig; notice the subnet mask):
docker network create --subnet=172.18.0.0/16 <network_name>
start a container with the given network (again, choose an ip that does not overlap with addresses in ifconfig and is not the network’s starting address):
docker run --rm -d -it --net <network_name> --ip <ipaddress> --name <container_name> <image_name>
to check what ip the container is at:
docker inspect <container_id/container_name> | grep IPAddress
now you might be able to talk to the container without port mappings.
2022-12-10
machine learning guide: lots of links, broad topics
This repository contains a hand-curated list of great machine (deep) learning resources for Natural Language Processing (NLP) with a focus on Bidirectional Encoder Representations from Transformers (BERT), attention mechanism, Transformer architectures/networks, and transfer learning in NLP.
Transformer (BERT) (Source)
Papers
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le and Ruslan Salakhutdinov.
- Uses smart caching to improve the learning of long-term dependency in Transformer. Key results: state-of-art on 5 language modeling benchmarks, including ppl of 21.8 on One Billion Word (LM1B) and 0.99 on enwiki8. The authors claim that the method is more flexible, faster during evaluation (1874 times speedup), generalizes well on small datasets, and is effective at modeling short and long sequences.
Conditional BERT Contextual Augmentation by Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han and Songlin Hu.
SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering by Chenguang Zhu, Michael Zeng and Xuedong Huang.
Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
The Evolved Transformer by David R. So, Chen Liang and Quoc V. Le.
- They used architecture search to improve Transformer architecture. Key is to use evolution and seed initial population with Transformer itself. The architecture is better and more efficient, especially for small size models.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
A new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE).
“Transformer-XL is a shifted model (each hyper-column ends with next token) while XLNet is a direct model (each hyper-column ends with contextual representation of same token).” — Thomas Wolf.
A clever dual masking-and-caching algorithm.
This is NOT “just throwing more compute” at the problem.
The authors have devised a clever dual-masking-plus-caching mechanism to induce an attention-based model to learn to predict tokens from all possible permutations of the factorization order of all other tokens in the same input sequence.
In expectation, the model learns to gather information from all positions on both sides of each token in order to predict the token.
For example, if the input sequence has four tokens, [“The”, “cat”, “is”, “furry”], in one training step the model will try to predict “is” after seeing “The”, then “cat”, then “furry”.
In another training step, the model might see “furry” first, then “The”, then “cat”.
Note that the original sequence order is always retained, e.g., the model always knows that “furry” is the fourth token.
The masking-and-caching algorithm that accomplishes this does not seem trivial to me.
The improvements to SOTA performance in a range of tasks are significant – see tables 2, 3, 4, 5, and 6 in the paper.
CTRL: Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Richard Socher et al. [Code].
PLMpapers - BERT (Transformer, transfer learning) has catalyzed research in pretrained language models (PLMs) and has sparked many extensions. This repo contains a list of papers on PLMs.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Google Brain.
- The group perform a systematic study of transfer learning for NLP using a unified Text-to-Text Transfer Transformer (T5) model and push the limits to achieve SoTA on SuperGLUE (approaching human baseline), SQuAD, and CNN/DM benchmark. [Code].
- Reformer: The Efficient Transformer by Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya.
- “They present techniques to reduce the time and memory complexity of Transformer, allowing batches of very long sequences (64K) to fit on one GPU. Should pave way for Transformer to be really impactful beyond NLP domain.” — @hardmaru
Supervised Multimodal Bitransformers for Classifying Images and Text (MMBT) by Facebook AI.
A Primer in BERTology: What we know about how BERT works by Anna Rogers et al.
- “Have you been drowning in BERT papers?”. The group survey over 40 papers on BERT’s linguistic knowledge, architecture tweaks, compression, multilinguality, and so on.
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by Google Brain. [Code] | [Blog post (unofficial)]
- Key idea: the architecture uses a subset of parameters on every training step and for each example. Upside: the model trains much faster. Downside: a super large model that won’t fit in a lot of environments.
An Attention Free Transformer by Apple.
A Survey of Transformers by Tianyang Lin et al.
Codex, a GPT language model that powers GitHub Copilot.
They investigate their model limitations (and strengths).
They discuss the potential broader impacts of deploying powerful code generation techs, covering safety, security, and economics.
Training language models to follow instructions with human feedback by OpenAI. They call the resulting models InstructGPT. ChatGPT is a sibling model to InstructGPT.
Training Compute-Optimal Large Language Models by Hoffmann et al. at DeepMind. TLDR: introduces a new 70B LM called “Chinchilla” that outperforms much bigger LMs (GPT-3, Gopher). DeepMind has found the secret to cheaply scale large language models — to be compute-optimal, model size and training data must be scaled equally. It shows that most LLMs are severely starved of data and under-trained. Given the new scaling law, even if you pump a quadrillion parameters into a model (GPT-4 urban myth), the gains will not compensate for 4x more training tokens.
Articles
BERT and Transformer
Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing from Google AI.
The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning).
Dissecting BERT by Miguel Romero and Francisco Ingham - Understand BERT in depth with an intuitive, straightforward explanation of the relevant concepts.
Generalized Language Models by Lilian Weng, Research Scientist at OpenAI.
- Permutation Language Modeling objective is the core of XLNet.
DistilBERT (from HuggingFace), released together with the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper from Google Research and Toyota Technological Institute. — Improvements for more efficient parameter usage: factorized embedding parameterization, cross-layer parameter sharing, and Sentence Order Prediction (SOP) loss to model inter-sentence coherence. [Blog post | Code]
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning - A BERT variant like ALBERT and cost less to train. They trained a model that outperforms GPT by using only one GPU; match the performance of RoBERTa by using 1/4 computation. It uses a new pre-training approach, called replaced token detection (RTD), that trains a bidirectional model while learning from all input positions. [Blog post | Code]
Attention Concept
The Annotated Transformer by Harvard NLP Group - Further reading to understand the “Attention is all you need” paper.
Attention? Attention! - Attention guide by Lilian Weng from OpenAI.
Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
Making Transformer networks simpler and more efficient - FAIR released an all-attention layer to simplify the Transformer model and an adaptive attention span method to make it more efficient (reduce computation time and memory footprint).
What Does BERT Look At? An Analysis of BERT’s Attention paper by Stanford NLP Group.
Transformer Architecture
The Illustrated Transformer by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
Watch Łukasz Kaiser’s talk walking through the model and its details.
Transformer-XL: Unleashing the Potential of Attention Models by Google Brain.
Generative Modeling with Sparse Transformers by OpenAI - an algorithmic improvement of the attention mechanism to extract patterns from sequences 30x longer than possible previously.
Stabilizing Transformers for Reinforcement Learning paper by DeepMind and CMU - they propose architectural modifications to the original Transformer and XL variant: moving the layer norm and adding gating creates the Gated Transformer-XL (GTrXL). It substantially improves stability and learning speed (integrating experience through time) in RL.
The Transformer Family by Lilian Weng - since the paper “Attention Is All You Need”, many new things have happened to improve the Transformer model. This post is about that.
DETR (DEtection TRansformer): End-to-End Object Detection with Transformers by FAIR - :fire: Computer vision has not yet been swept up by the Transformer revolution. DETR completely changes the architecture compared with previous object detection systems. (PyTorch Code and pretrained models). “A solid swing at (non-autoregressive) end-to-end detection. Anchor boxes + Non-Max Suppression (NMS) is a mess. I was hoping detection would go end-to-end back in ~2013)” — Andrej Karpathy
Transformers for software engineers - This post will be helpful to software engineers who are interested in learning ML models, especially anyone interested in Transformer interpretability. The post walks through a (mostly) complete implementation of a GPT-style Transformer, but the goal is not running code; instead, they use the language of software engineering and programming to explain how these models work and to articulate some of the perspectives they bring to them when doing interpretability work.
Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance - PaLM is a dense decoder-only Transformer model trained with the Pathways system, which enabled Google to efficiently train a single model across multiple TPU v4 Pods. The example explaining a joke is remarkable. This shows that it can generate explicit explanations for scenarios that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding.
Generative Pre-Training Transformer (GPT)
Improving Language Understanding with Unsupervised Learning - this is an overview of the original OpenAI GPT model.
🦄 How to build a State-of-the-Art Conversational AI with Transfer Learning by Hugging Face.
The Illustrated GPT-2 (Visualizing Transformer Language Models) by Jay Alammar.
MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism by NVIDIA ADLR.
OpenGPT-2: We Replicated GPT-2 Because You Can Too - the authors trained a 1.5 billion parameter GPT-2 model on a similar sized text dataset and they reported results that can be compared with the original model.
MSBuild demo of an OpenAI generative text model generating Python code [video] - The model that was trained on GitHub OSS repos. The model uses English-language code comments or simply function signatures to generate entire Python functions. Cool!
GPT-3: Language Models are Few-Shot Learners (paper) by Tom B. Brown (OpenAI) et al. - “We train GPT-3, an autoregressive language model with 175 billion parameters :scream:, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.”
elyase/awesome-gpt3 - A collection of demos and articles about the OpenAI GPT-3 API.
How GPT3 Works - Visualizations and Animations by Jay Alammar.
GPT-Neo - Replicate a GPT-3 sized model and open source it for free. GPT-Neo is “an implementation of model parallel GPT2 & GPT3-like models, with the ability to scale up to full GPT3 sizes (and possibly more!), using the mesh-tensorflow library.” [Code].
GitHub Copilot, powered by OpenAI Codex - Codex is a descendant of GPT-3. Codex translates natural language into code.
GPT-4 Rumors From Silicon Valley - GPT-4 is almost ready. GPT-4 would be multimodal, accepting text, audio, image, and possibly video inputs. Release window: Dec - Feb. #hype
New GPT-3 model: text-Davinci-003 - Improvements:
Handle more complex intents — you can get even more creative with how you make use of its capabilities now.
Higher quality writing — clearer, more engaging, and more compelling content.
Better at longer form content generation.
- ChatGPT blog post and link to the conversational interface.
ChatGPT is OpenAI’s newest language model fine-tuned from a model in the GPT-3.5 series (which finished training in early 2022), optimized for dialogue. It is trained using Reinforcement Learning from Human Feedback; human AI trainers provide supervised fine-tuning by playing both sides of the conversation.
It is evidently better than GPT-3 at following user instructions and context. People have noticed that ChatGPT’s output quality seems to represent a notable improvement over previous GPT-3 models.
Large Language Model (LLM)
GPT-J-6B - Can’t access GPT-3? Here’s GPT-J — its open-source cousin.
Fun and Dystopia With AI-Based Code Generation Using GPT-J-6B - Prior to GitHub Copilot tech preview launch, Max Woolf, a data scientist tested GPT-J-6B’s code “writing” abilities.
GPT-Code-Clippy (GPT-CC) - An open source version of GitHub Copilot. The GPT-CC models are fine-tuned versions of GPT-2 and GPT-Neo.
GPT-NeoX-20B - A 20 billion parameter model trained using EleutherAI’s GPT-NeoX framework. They expect it to perform well on many tasks. You can try out the model on GooseAI playground.
Metaseq - A codebase for working with Open Pre-trained Transformers (OPT).
YaLM 100B by Yandex is a GPT-like pretrained language model with 100B parameters for generating and processing text. It can be used freely by developers and researchers from all over the world.
BigScience’s BLOOM-176B from the Hugging Face repository [paper, blog post] - BLOOM is a 175-billion parameter model for language processing, able to generate text much like GPT-3 and OPT-175B. It was developed to be multilingual, being deliberately trained on datasets containing 46 natural languages and 13 programming languages.
bitsandbytes-Int8 inference for Hugging Face models - You can run BLOOM-176B/OPT-175B easily on a single machine, without performance degradation. If true, this could be a game changer in enabling people outside of big tech companies being able to use these LLMs.
Additional Reading
How to Build OpenAI’s GPT-2: “The AI That’s Too Dangerous to Release”.
How the Transformers broke NLP leaderboards by Anna Rogers. :fire::fire::fire:
A well put summary post on problems with large models that dominate NLP these days.
Larger models + more data = progress in Machine Learning research :question:
Transformers From Scratch tutorial by Peter Bloem.
Real-time Natural Language Understanding with BERT using NVIDIA TensorRT on Google Cloud T4 GPUs achieves 2.2 ms latency for inference. Optimizations are open source on GitHub.
NLP’s Clever Hans Moment has Arrived by The Gradient.
Language, trees, and geometry in neural networks - a series of expository notes accompanying the paper, “Visualizing and Measuring the Geometry of BERT” by Google’s People + AI Research (PAIR) team.
Benchmarking Transformers: PyTorch and TensorFlow by Hugging Face - a comparison of inference time (on CPU and GPU) and memory usage for a wide range of transformer architectures.
Evolution of representations in the Transformer - An accessible article that presents the insights of their EMNLP 2019 paper. They look at how the representations of individual tokens in Transformers trained with different objectives change.
The dark secrets of BERT - This post probes fine-tuned BERT models for linguistic knowledge. In particular, the authors analyse how many self-attention patterns with some linguistic interpretation are actually used to solve downstream tasks. TL;DR: They are unable to find evidence that linguistically interpretable self-attention maps are crucial for downstream performance.
A Visual Guide to Using BERT for the First Time - Tutorial on using BERT in practice, such as for sentiment analysis on movie reviews by Jay Alammar.
Turing-NLG: A 17-billion-parameter language model by Microsoft that outperforms the state of the art on many downstream NLP tasks. This work would not be possible without breakthroughs produced by the DeepSpeed library (compatible with PyTorch) and ZeRO optimizer, which can be explored more in this accompanying blog post.
MUM (Multitask Unified Model): A new AI milestone for understanding information by Google.
Based on transformer architecture but more powerful.
Multitask means: supports text and images, knowledge transfer between 75 languages, understand context and go deeper in a topic, and generate content.
GPT-3 is No Longer the Only Game in Town - GPT-3 was by far the largest AI model of its kind last year (2020). Now? Not so much.
OpenAI’s API Now Available with No Waitlist - GPT-3 access without the wait. However, apps must be approved before going live. This release also allows them to review applications, monitor for misuse, and better understand the effects of this tech.
The Inherent Limitations of GPT-3 - One thing missing from the article if you’ve read Gwern’s GPT-3 Creative Fiction article before is the mystery known as “Repetition/Divergence Sampling”:
when you generate free-form completions, they have a tendency to eventually fall into repetitive loops of gibberish.
For those using Copilot, you should have experienced this weirdness where it generates the same line or block of code over and over again.
Language Modelling at Scale: Gopher, Ethical considerations, and Retrieval by DeepMind - The paper presents an analysis of Transformer-based language model performance across a wide range of model scales, from models with tens of millions of parameters up to a 280 billion parameter model called Gopher.
Competitive programming with AlphaCode by DeepMind - AlphaCode uses transformer-based language models to generate code that can create novel solutions to programming problems which require an understanding of algorithms.
Building games and apps entirely through natural language using OpenAI’s code-davinci model - The author built several small games and apps without touching a single line of code, simply by telling the model what they want.
Open AI gets GPT-3 to work by hiring an army of humans to fix GPT’s bad answers
GPT-3 can run code - You provide an input text and a command and GPT-3 will transform them into an expected output. It works well for tasks like changing coding style, translating between programming languages, refactoring, and adding documentation. For example, it converts JSON into YAML, translates Python code to JavaScript, or improves the runtime complexity of a function.
Using GPT-3 to explain how code works by Simon Willison.
Character AI announces they’re building a full-stack AGI company so you could create your own AI to help you with anything, using conversational AI research. The co-founders are Noam Shazeer (co-invented Transformers, scaled them to supercomputers for the first time, and pioneered large-scale pretraining) and Daniel de Freitas (led the development of LaMDA), work that is foundational to recent AI progress.
How Much Better is OpenAI’s Newest GPT-3 Model? - In addition to ChatGPT, OpenAI releases text-davinci-003, a Reinforcement Learning-tuned model that performs better long-form writing. Example, it can explain code in the style of Eminem. 😀
Educational
- minGPT by Andrej Karpathy - A PyTorch re-implementation of GPT, both training and inference. minGPT tries to be small, clean, interpretable and educational, as most of the currently available GPT model implementations can be a bit sprawling. GPT is not a complicated model and this implementation is appropriately about 300 lines of code.
Tutorials
- How to train a new language model from scratch using Transformers and Tokenizers tutorial by Hugging Face. :fire:
Videos
BERTology
- XLNet Explained by NLP Breakfasts.
- Clear explanation. Also covers the two-stream self-attention idea.
- The Future of NLP by 🤗
- Dense overview of what is going on in transfer learning in NLP currently, limits, and future directions.
- The Transformer neural network architecture explained by AI Coffee Break with Letitia Parcalabescu.
- High-level explanation, best suited when unfamiliar with Transformers.
Attention and Transformer Networks
- Sequence to Sequence Learning Animated (Inside Transformer Neural Networks and Attention Mechanisms) by learningcurve.
Official Implementations
- google-research/bert - TensorFlow code and pre-trained models for BERT.
Other Implementations
PyTorch and TensorFlow
🤗 Hugging Face Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, CTRL…) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch. [Paper]
spacy-transformers - a library that wraps Hugging Face’s Transformers in order to extract features to power NLP pipelines. It also calculates an alignment so the Transformer features can be related back to actual words instead of just wordpieces.
PyTorch
codertimo/BERT-pytorch - Google AI 2018 BERT pytorch implementation.
innodatalabs/tbert - PyTorch port of BERT ML model.
kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper.
dreamgonfly/BERT-pytorch - A PyTorch implementation of BERT in “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”.
dhlee347/pytorchic-bert - A Pytorch implementation of Google BERT.
pingpong-ai/xlnet-pytorch - A Pytorch implementation of Google Brain XLNet.
facebook/fairseq - RoBERTa: A Robustly Optimized BERT Pretraining Approach by Facebook AI Research. SoTA results on GLUE, SQuAD and RACE.
NVIDIA/Megatron-LM - Ongoing research training transformer language models at scale, including: BERT.
deepset-ai/FARM - Simple & flexible transfer learning for the industry.
NervanaSystems/nlp-architect - NLP Architect by Intel AI. Among other libraries, it provides a quantized version of Transformer models and efficient training method.
kaushaltrivedi/fast-bert - Super easy library for BERT based NLP models. Built based on 🤗 Transformers and is inspired by fast.ai.
NVIDIA/NeMo - Neural Modules is a toolkit for conversational AI by NVIDIA. They are trying to improve speech recognition with BERT post-processing.
facebook/MMBT from Facebook AI - Multimodal transformers model that can accept a transformer model and a computer vision model for classifying image and text.
dbiir/UER-py from Tencent and RUC - Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo (with more focus on Chinese).
Keras
Separius/BERT-keras - Keras implementation of BERT with pre-trained weights.
CyberZHG/keras-bert - Implementation of BERT that could load official pre-trained models for feature extraction and prediction.
bojone/bert4keras - Light reimplement of BERT for Keras.
TensorFlow
guotong1988/BERT-tensorflow - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper.
zihangdai/xlnet - Code repository associated with the XLNet paper.
Chainer
- soskek/bert-chainer - Chainer implementation of “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”.
Transfer Learning in NLP
As Jay Alammar put it:
The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community has been putting forward incredibly powerful components that you can freely download and use in your own models and pipelines (It’s been referred to as NLP’s ImageNet moment, referencing how years ago similar developments accelerated the development of machine learning in Computer Vision tasks).
One of the latest milestones in this development is the release of BERT, an event described as marking the beginning of a new era in NLP. BERT is a model that broke several records for how well models can handle language-based tasks. Soon after the release of the paper describing the model, the team also open-sourced the code of the model, and made available for download versions of the model that were already pre-trained on massive datasets. This is a momentous development since it enables anyone building a machine learning model involving language processing to use this powerhouse as a readily-available component – saving the time, energy, knowledge, and resources that would have gone to training a language-processing model from scratch.
BERT builds on top of a number of clever ideas that have been bubbling up in the NLP community recently – including but not limited to Semi-supervised Sequence Learning (by Andrew Dai and Quoc Le), ELMo (by Matthew Peters and researchers from AI2 and UW CSE), ULMFiT (by fast.ai founder Jeremy Howard and Sebastian Ruder), the OpenAI transformer (by OpenAI researchers Radford, Narasimhan, Salimans, and Sutskever), and the Transformer (Vaswani et al).
ULMFiT: Nailing down Transfer Learning in NLP
ULMFiT introduced methods to effectively utilize a lot of what the model learns during pre-training – more than just embeddings, and more than contextualized embeddings. ULMFiT introduced a language model and a process to effectively fine-tune that language model for various tasks.
NLP finally had a way to do transfer learning probably as well as Computer Vision could.
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning by Sebastian Ruder et al. MultiFiT extends ULMFiT to make it more efficient and more suitable for language modelling beyond English. (EMNLP 2019 paper)
Books
- Transfer Learning for Natural Language Processing - A book that is a practical primer to transfer learning techniques capable of delivering huge improvements to your NLP models.
Other Resources
hanxiao/bert-as-service - Mapping a variable-length sentence to a fixed-length vector using pretrained BERT model.
brightmart/bert_language_understanding - Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN.
algteam/bert-examples - BERT examples.
JayYip/bert-multiple-gpu - A multiple GPU support version of BERT.
HighCWu/keras-bert-tpu - Implementation of BERT that could load official pre-trained models for feature extraction and prediction on TPU.
whqwill/seq2seq-keyphrase-bert - Add BERT to encoder part for https://github.com/memray/seq2seq-keyphrase-pytorch
xu-song/bert_as_language_model - BERT as language model, a fork from Google official BERT implementation.
yuanxiaosc/Deep_dynamic_word_representation - TensorFlow code and pre-trained models for deep dynamic word representation (DDWR). It combines the BERT model and ELMo’s deep context word representation.
Pydataman/bert_examples - Some examples of BERT. run_classifier.py is based on Google BERT for the Kaggle Quora Insincere Questions Classification challenge; run_ner.py is based on the first season of the Ruijin Hospital AI contest, an NER task written with BERT.
guotong1988/BERT-chinese - Pre-training of deep bidirectional transformers for Chinese language understanding.
zhongyunuestc/bert_multitask - Multi-task.
Microsoft/AzureML-BERT - End-to-end walk through for fine-tuning BERT using Azure Machine Learning.
bigboNed3/bert_serving - Export BERT model for serving.
yoheikikuta/bert-japanese - BERT with SentencePiece for Japanese text.
nickwalton/AIDungeon - AI Dungeon 2 is a completely AI-generated text adventure built with OpenAI’s largest 1.5B-parameter GPT-2 model. It’s a first-of-its-kind game that allows you to enter, and will react to, any action you can imagine.
turtlesoupy/this-word-does-not-exist - “This Word Does Not Exist” is a project that allows people to train a variant of GPT-2 that makes up words, definitions and examples from scratch. We’ve never seen fake text so real.
Tools
jessevig/bertviz - Tool for visualizing attention in the Transformer model.
FastBert - A simple deep learning library that allows developers and data scientists to train and deploy BERT based models for NLP tasks beginning with text classification. The work on FastBert is inspired by fast.ai.
gpt2tc - A small program using the GPT-2 LM to complete and compress texts. It has no external dependency, requires no GPU and is quite fast. The smallest model (117M parameters) is provided. Larger models can be downloaded as well. (no waitlist, no sign up required).
Tasks
Named-Entity Recognition (NER)
kyzhouhzau/BERT-NER - Use google BERT to do CoNLL-2003 NER.
zhpmatrix/bert-sequence-tagging - Chinese sequence labeling.
JamesGu14/BERT-NER-CLI - Bert NER command line tester with step by step setup guide.
mhcao916/NER_Based_on_BERT - This project is based on Google BERT model, which is a Chinese NER.
macanv/BERT-BiLSMT-CRF-NER - TensorFlow solution of NER task using Bi-LSTM-CRF model with Google BERT fine-tuning.
ProHiryu/bert-chinese-ner - Use the pre-trained language model BERT to do Chinese NER.
FuYanzhe2/Name-Entity-Recognition - Lstm-CRF, Lattice-CRF, recent NER related papers.
king-menin/ner-bert - NER task solution (BERT-Bi-LSTM-CRF) with Google BERT https://github.com/google-research.
Classification
brightmart/sentiment_analysis_fine_grain - Multi-label classification with BERT; Fine Grained Sentiment Analysis from AI challenger.
zhpmatrix/Kaggle-Quora-Insincere-Questions-Classification - Kaggle baseline—fine-tuning BERT and tensor2tensor based Transformer encoder solution.
maksna/bert-fine-tuning-for-chinese-multiclass-classification - Use Google pre-training model BERT to fine-tune for the Chinese multiclass classification.
NLPScott/bert-Chinese-classification-task - BERT Chinese classification practice.
fooSynaptic/BERT_classifer_trial - BERT trial for Chinese corpus classification.
xiaopingzhong/bert-finetune-for-classfier - Fine-tuning the BERT model while building your own dataset for classification.
Socialbird-AILab/BERT-Classification-Tutorial - Tutorial.
malteos/pytorch-bert-document-classification - Enriching BERT with Knowledge Graph Embedding for Document Classification (PyTorch)
Text Generation
asyml/texar - Toolkit for Text Generation and Beyond. Texar is a general-purpose text generation toolkit, has also implemented BERT here for classification, and text generation applications by combining with Texar’s other modules.
Plug and Play Language Models: a Simple Approach to Controlled Text Generation (PPLM) paper by Uber AI.
Question Answering (QA)
matthew-z/R-net - R-net in PyTorch, with BERT and ELMo.
vliu15/BERT - TensorFlow implementation of BERT for QA.
benywon/ChineseBert - This is a Chinese BERT model specific for question answering.
facebookresearch/SpanBERT - Question Answering on SQuAD; improving pre-training by representing and predicting spans.
Knowledge Graph
sakuranew/BERT-AttributeExtraction - Using BERT for attribute extraction in knowledge graph. Fine-tuning and feature extraction. The BERT-based fine-tuning and feature extraction methods are used to extract knowledge attributes of Baidu Encyclopedia characters.
lvjianxin/Knowledge-extraction - Chinese knowledge-based extraction. Baseline: bi-LSTM+CRF upgrade: BERT pre-training.
License
This repository contains a variety of content; some developed by Cedric Chee, and some from third-parties. The third-party content is distributed under the license provided by those parties.
I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer.
The content developed by Cedric Chee is distributed under the following license:
Code
The code in this repository, including all code samples in the notebooks listed above, is released under the MIT license. Read more at the Open Source Initiative.
Text
The text content of the book is released under the CC-BY-NC-ND license. Read more at Creative Commons.
2022-12-10
when installing global packages, we do not need to specify NODE_PATH, but it is not configured beforehand, so when you want to import packages from there you will face issues.
for zsh/bash/fish:
export NODE_PATH=<NODE_PATH>
on windows, just use the old-school drill (open the environment variable editor)
check the exact path of NODE_PATH after invoking npm install -g <package_name>, then verify that the installed package exists at the path you guessed.
on termux: /data/data/com.termux/files/usr/lib/node_modules
on kali: /usr/local/lib/node_modules (may be inaccurate)
on macos: /opt/homebrew/lib/node_modules (nodejs installed via brew)