model
2024-02-17
2022-05-29
Pushing a model to the Hugging Face Hub via code:
https://zhuanlan.zhihu.com/p/390826470
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "camembert-base"
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Push the model and tokenizer to the Hub under the repo name "dummy-model"
model.push_to_hub("dummy-model")
tokenizer.push_to_hub("dummy-model")
# The config can be pushed the same way (repo name assumed to match):
model.config.push_to_hub("dummy-model")
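Note: push_to_hub requires authenticating with the Hub first, e.g. by running huggingface-cli login or calling huggingface_hub.login().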
2022-03-30
Currently we can use hyperopt as the online optimizer. For offline optimization there are, of course, better options (or the optimum can be predicted directly).
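A minimal sketch of driving hyperopt's TPE optimizer over a toy search space (the objective and search space here are placeholders for the real online metric):

from hyperopt import fmin, tpe, hp, Trials

# Placeholder objective: replace with the real metric to minimize online.
def objective(params):
    x = params["x"]
    return (x - 2.0) ** 2

space = {"x": hp.uniform("x", -10.0, 10.0)}

trials = Trials()
best = fmin(
    fn=objective,
    space=space,
    algo=tpe.suggest,  # Tree-structured Parzen Estimator
    max_evals=50,
    trials=trials,
)
print(best)  # e.g. {'x': 1.99...}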
For a sequence of models, use a classic logic solver to find the best logic combination of their outputs:
Select model 0.
Combine model 1 with model 0: iterate through the 16 possible two-input Boolean functions (0, not 0, 1, not 1, 0 and 1, 0 or 1, 0 and not 1, 0 or not 1, not 0 and 1, not 0 or 1, not 0 and not 1, not 0 or not 1, plus the constant and xor/xnor functions), and choose the best one. Mark the result as model A.
Combine model 2 with model A using the same optimizer to generate model B.
Finally, iterate through all models and generate model X as the best chained logic combination; a rough sketch follows.
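A minimal sketch of this greedy enumeration, assuming binary predictions as NumPy boolean arrays and accuracy against held-out labels as the selection score (all function names and data here are illustrative):

import itertools
import numpy as np

def all_binary_ops():
    """All 16 two-input Boolean functions, enumerated by truth table."""
    ops = []
    for bits in itertools.product([0, 1], repeat=4):
        # bits = outputs for inputs (0,0), (0,1), (1,0), (1,1)
        table = np.array(bits, dtype=bool)
        ops.append(lambda a, b, t=table: t[a.astype(int) * 2 + b.astype(int)])
    return ops

def greedy_logic_combine(preds, labels, score=lambda p, y: np.mean(p == y)):
    """Greedily fold each model's predictions into the running combination,
    picking the Boolean function that maximizes the score at each step."""
    combined = preds[0].astype(bool)  # start from model 0
    for p in preds[1:]:
        p = p.astype(bool)
        best_op = max(all_binary_ops(),
                      key=lambda op: score(op(combined, p), labels))
        combined = best_op(combined, p)  # model A, then model B, ...
    return combined  # model X

# Toy usage with random binary predictions (illustrative only):
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100).astype(bool)
preds = [rng.integers(0, 2, size=100).astype(bool) for _ in range(3)]
combined = greedy_logic_combine(preds, labels)
print("accuracy:", np.mean(combined == labels))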