deepfacelab, leading software for faceswap video generation:
https://github.com/iperov/DeepFaceLab
faceswap:
https://github.com/deepfakes/faceswap
arbitrary face swap on one single model:
Latest code:
https://github.com/transpchan/Live3D-v2
MMD format conversion tool:
https://github.com/KurisuMakise004/MMD2UDP
Official site: https://transpchan.github.io/live3d/
Colab: https://colab.research.google.com/github/transpchan/Live3D-v2/blob/main/notebook.ipynb
CoNR QQ group: 362985749
[AI painting] Synthesizing a dance video from only 4 images (Bilibili): https://b23.tv/NaF20nA
Materials and project links used:
BV19V4y1x7bJ
GitHub: https://github.com/megvii-research/CoNR
GitHub: https://github.com/KurisuMakise004/MMD2UDP
Voice-over: VITS-based 弥希Miki voice: BV1vW4y1e7bn
BGM: 风神少女
still image to dancing
everybody dance now:
https://github.com/carolineec/EverybodyDanceNow
edn pytorch implementation:
https://wk.baidu.com/view/329daafdbaf3f90f76c66137ee06eff9aef84994
http://www.360doc.cn/mip/982055971.html
anime character database that contains dialogs:
https://www.animecharactersdatabase.com/episodetranscript.php?pid=1915&epid=1
quodb, the movie quotes database:
scifi movie script example:
http://www.scifiscripts.com/cartoon/howls_moving_castle.html
japanese bangumi script:
http://akita.cool.ne.jp/hikoseki/script/laputascript_v210all_n.html
May not work:
https://jref.com/threads/where-in-internet-could-i-find-anime-dialogs-scripts-in-japanese.43408/
suggest searching for this on Zhihu:
https://zhuanlan.zhihu.com/p/389873370
https://zhuanlan.zhihu.com/p/450050772
find movie quotes:
33.agilestudio.cn
https://zhaotaici.cn/mindex.html
Bangumi info wiki:
thin plate spline motion model
video animation generation using only a few character portraits: draws character animation from character design sheets (with an animation driver)
https://github.com/megvii-research/CoNR
galgame video generator using pygame (automated galgame-style animation generator):
https://github.com/w4123/TRPG-Replay-Generator
digan, can generate taichi videos:
suggest segmenting the video first and then using this to do the work; see the sketch after these links.
https://github.com/sihyun-yu/digan?ref=pythonawesome.com
https://sihyun-yu.github.io/digan/
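For the pre-segmentation step, a minimal Python sketch using ffmpeg's segment muxer (file names are placeholders):

    import subprocess

    # split input.mp4 into ~2-second chunks without re-encoding,
    # so each clip can be fed to the model separately
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-f", "segment", "-segment_time", "2",
        "-c", "copy", "clip_%03d.mp4",
    ], check=True)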
MoCoGAN can generate the same object performing different actions, as well as the same action performed by different objects:
https://github.com/sergeytulyakov/mocogan
still image to talking video with hands moving; compared to FOMM, it can even animate non-human objects:
https://snap-research.github.io/articulated-animation/
montage.ai generates video from music with a deep analyzer:
https://github.com/Tartar-san/montage.ai
tiktok hashtag montage:
open sourced text to image:
https://github.com/lucidrains/DALLE-pytorch
dalle_mini:
https://github.com/borisdayma/dalle-mini
jina ai human-in-the-loop multi-prompt text-to-image dalle-flow:
https://github.com/jina-ai/dalle-flow
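A minimal client sketch following the usage pattern in the dalle-flow README; the server URL here is a placeholder for your own deployed endpoint:

    from docarray import Document  # docarray < 2.0, as used by dalle-flow

    server_url = 'grpc://127.0.0.1:51005'  # placeholder; point at your dalle-flow server
    prompt = 'an oil painting of a lighthouse in a storm'

    # send the prompt; generated candidate images come back as matches
    doc = Document(text=prompt).post(server_url, parameters={'num_images': 8})
    doc.matches.plot_image_sprites(fig_size=(10, 10), show_index=True)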
dalle playground:
image local search by similarity:
https://github.com/ProvenanceLabs/image-match
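A minimal sketch of image-match's documented signature API (file names are placeholders):

    from image_match.goldberg import ImageSignature

    gis = ImageSignature()
    sig_a = gis.generate_signature('a.jpg')  # placeholder local files or URLs
    sig_b = gis.generate_signature('b.jpg')

    # lower is more similar; the project's README treats roughly < 0.40 as a match
    print(gis.normalized_distance(sig_a, sig_b))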
anime image search by scene:
https://github.com/soruly/trace.moe
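trace.moe also exposes a public HTTP API (documented in the trace.moe-api repo); a minimal lookup sketch, with the screenshot URL as a placeholder:

    import requests

    resp = requests.get(
        "https://api.trace.moe/search",
        params={"url": "https://example.com/frame.jpg"},  # placeholder screenshot URL
    )
    resp.raise_for_status()
    for hit in resp.json()["result"][:3]:
        # AniList ID, episode number, and match confidence for the top hits
        print(hit["anilist"], hit["episode"], round(hit["similarity"], 3))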
bing image search:
https://cn.bing.com/visualsearch/Microsoft/SimilarImages
https://cn.bing.com/visualsearch
sogou shitu:
https://pic.sogou.com/shitu/index.html
baidu shitu:
shitu.baidu.com
google image search:
https://gfsoso.soik.top/shitu.html
yandex image search:
image search websites:
https://zhuanlan.zhihu.com/p/52693499
tutorial on image search, gif search, image enlargement, browser plugins:
https://www.bilibili.com/read/cv8688532
duososo shitu (includes other meta search engines):
http://duososo.com/index_shitu.php
zhihu image search websites:
https://zhuanlan.zhihu.com/p/25610099
find font by image (qiuziti.com not working):
https://zhuanlan.zhihu.com/p/25440271?refer=wnsouba
find bangumi segments by image:
There is also a Telegram bot called WhatAnimeBot.
QQ chat history export / QQ message export
render chat records to pictures
conclusion so far: people like to use vue to recreate popular chat interfaces, and you can grab a UI from one of these projects.
npm install vue-mchat
vue: this project is an online chat system that recreates the Mac QQ client as closely as possible.
vue-miniQQ: a single-page application imitating mobile QQ, built with Vue 2
render chat record to picture (WeChat chat history rendered as images)
HTML5 WebSocket web group-chat demo imitating the WeChat UI
Simple chatbot exercise using only JavaScript, HTML, CSS
An online chat room built with AngularJS, Node.js, Express, and socket.io.
When posting pictures of pretty girls on Qzone, cover or swap the faces, or use Live2D / three.js or even a rendered 3D model to cover them.
somehow the wechat web uos protocol is usable again? check it out.
https://www.npmjs.com/package/wechaty-puppet-wechat
https://github.com/wechaty/puppet-wechat/pull/206
would it be a lot easier if we could send those article/video links to external (outside the GFW) social media platforms in their native language? censorship would still apply.
wechat frida hook on macos:
https://github.com/dounine/wechat-frida
WeChat PC Frida hook:
https://github.com/K265/frida-wechat-sticker
https://github.com/kingking888/frida_wechat_hook
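As a generic starting point (not the specific hooks from the repos above), a minimal frida sketch that attaches to a running WeChat process and traces Winsock send() calls; the process name is an assumption:

    import sys
    import frida

    session = frida.attach("WeChat.exe")  # assumption: PC client process name
    script = session.create_script("""
    // log every call to ws2_32.dll's send() as a first tracing step
    Interceptor.attach(Module.getExportByName('ws2_32.dll', 'send'), {
      onEnter(args) {
        send({ fd: args[0].toInt32(), len: args[2].toInt32() });
      }
    });
    """)
    script.on("message", lambda message, data: print(message))
    script.load()
    sys.stdin.read()  # keep the process alive while hooks fire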
A QQ account can join at most 500 groups and add 1500 friends, where joinable groups = max(0, 500 - groups already joined - friend count); see the sketch after this list.
You can leave quiet groups and groups that never send red packets, and delete friends.
Block others from adding me as a friend, allow others to pull me into groups, auto-leave ad groups, and leave inactive groups.
Only two or three groups can be joined per day, or about ten from a phone.
Thirty-some friends can be added per day.
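The quota formula from the note above as a tiny Python helper:

    def joinable_groups(groups_joined: int, friend_count: int, cap: int = 500) -> int:
        # remaining groups = max(0, 500 - groups already joined - friend count)
        return max(0, cap - groups_joined - friend_count)

    print(joinable_groups(groups_joined=120, friend_count=300))  # -> 80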
A Python script for QQ group verification:
https://www.bilibili.com/read/mobile?id=10044756
frida injection into mobile Android QQ to open Qzone:
https://github.com/xhtechxposed/fridainjectqq
search https://qun.qq.com in search engines
Consider taking screenshots to capture the QQ group verification question, or testing on a phone with Appium.
if possible then just use frida/radare2 or some reverse engineering to automate the process.
radare2 -> rizin.re (radare2 fork) based, IDA-like reverse engineering tool with the ghidra decompiler:
How to get the group-join verification question? I recall you can intercept the packets the PC client receives when searching for a QQ group to obtain the verification question. Maybe that won't work, but some parameters can be obtained either way; check whether they contain the verification question and whether anyone is allowed to join. Also consider intercepting opqqq's traffic, or sending generic join requests such as "加群学习" (joining to learn) or "小伙伴一起玩" (let's play together), or using an AI model to generate one from the group's description and topic.
One phone number can register 10 QQ accounts (the cap on QQ accounts bound to one phone number is 10), but only two or three can be successfully registered per phone number per day.
WeChat needs serious reverse engineering like frida.
https://github.com/cixingguangming55555/wechat-bot
WeChat bot with a web API; injects a DLL into the PC client
https://github.com/mrsanshui/WeChatPYAPI
Python WeChat PC hook that can add friends
https://github.com/snlie/WeChat-Hook
WeChat hook in Easy Language (易语言); very full-featured (search, add contacts); has tutorial links and teaching code
https://github.com/TonyChen56/WeChatRobot
An older WeChat reverse-engineering module; wechatapis.dll could not be obtained after much effort; has tutorial links
https://github.com/wechaty/puppet-xp
frida-driven WeChat puppet; no contact-adding or people-search yet; runs on Windows
wechat reverse engineering tutorials:
https://github.com/hedada-hc/pc_wechat_hook
https://github.com/zmrbak/PcWeChatHooK
wechaty base framework:
https://github.com/Wechaty/python-wechaty/ (puppet support might be incomplete)
https://github.com/Wechaty/wechaty/
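A minimal python-wechaty bot sketch, following the project's documented on_message pattern (assumes a puppet service endpoint is configured via the WECHATY_PUPPET / WECHATY_PUPPET_SERVICE_TOKEN environment variables):

    import asyncio
    from wechaty import Wechaty, Message

    class PingBot(Wechaty):
        async def on_message(self, msg: Message) -> None:
            # reply "pong" whenever any chat says "ping"
            if msg.text() == "ping":
                await msg.say("pong")

    asyncio.run(PingBot().start())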
botoy, opqbot API for Python:
https://botoy.opqbot.com/zh_CN/latest/action/
qq opqbot (for wechat there is rstbot), download and install (needs a gitter.im api token):
https://docs.opqbot.com/guide/manual.html#启动失败
opqbot needs to be reverse engineered or we won’t know what is going on inside.
unofficial opqbot wiki:
https://mcenjoy.cn/opqbotwiki/
wechat bot (non-free wechat puppets):
wechaty
quoted content is controversial and highly viral; it must be filtered and classified before proceeding.
quotes are like comments.
vpaint
https://github.com/dalboris/vpaint
vgc
opentoonz v1.4 and later
synfig vector graphic animation:
synfig.org
2d animation tool:
sadtalker
wombo.ai, likely a talking-head model or yanderifier under the hood
https://github.com/mchong6/GANsNRoses/
https://github.com/williamyang1991/VToonify
Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of portrait-image toonification models built on the successful StyleGAN have been proposed, these image-oriented methods show obvious limitations when applied to videos. In this work, we investigate the challenging task of controllable high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on multi-scale content features extracted by an encoder, so as to better preserve frame details. It accepts non-aligned faces in videos of variable size as input, which helps produce complete face regions with natural motion in the output. The framework is compatible with existing StyleGAN-based image toonification models, extends them to video toonification, and inherits their appealing features for flexible control over color and intensity. This work presents two instantiations of VToonify built on Toonify and DualStyleGAN for collection-based portrait video style transfer. Extensive experimental results demonstrate that the proposed VToonify framework outperforms existing methods in generating high-quality, temporally coherent artistic portrait videos with flexible style control.
all-in-one colab text to talking-face generation; also consider the paddlespeech example:
https://github.com/ChintanTrivedi/ask-fake-ai-karen
available in PaddleGAN as an example used in PaddleSpeech: the artificial host.
lip-sync accurate wav2lip:
https://github.com/Rudrabha/Wav2Lip
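Wav2Lip's README documents an inference script with these flags; a sketch that drives it from Python (file names are placeholders; run from inside the repo checkout):

    import subprocess

    subprocess.run([
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained checkpoint
        "--face", "face_video.mp4",   # video or image containing the face
        "--audio", "speech.wav",      # driving audio track
    ], check=True)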
lipgan generates realistic lip-sync talking-head animation (fully_pythonic branch or Google Colab notebook):
https://github.com/Rudrabha/LipGAN
google’s lipsync implementation, using tensorflow facemesh:
https://github.com/google/lipsync
https://lipsync.withyoutube.com/
https://github.com/tensorflow/tfjs-models/tree/master/facemesh
network reverse engineering for wombo.ai:
https://github.com/the-garlic-os/wombo-reverse-engineering
matamata uses Vosk models; the Gentle lip-sync method is recommended:
https://github.com/AI-Spawn/Auto-Lip-Sync
https://github.com/Matamata-Animator/Matamata-Core
https://github.com/Yey007/Auto-Lip-Sync
ai-based lip reading might be irrelevant to lip-sync video generation: