2022-05-31
Jiggy Boring Still Image To Funny Dance Video (Dancing / Dance)

Jiggy boring still image to funny dance video

Latest version of the code:

https://github.com/transpchan/Live3D-v2

MMD format conversion tool:

https://github.com/KurisuMakise004/MMD2UDP

Official site: https://transpchan.github.io/live3d/

Colab:https://colab.research.google.com/github/transpchan/Live3D-v2/blob/main/notebook.ipynb

CoNR QQ group: 362985749

[AI painting] Synthesizing a dance video from just 4 images (Bilibili): https://b23.tv/NaF20nA

Materials and project links used:

BV19V4y1x7bJ

GitHub:https://github.com/megvii-research/CoNR

GitHub:https://github.com/KurisuMakise004/MMD2UDP

Voice-over: VITS-based Miki (弥希miki) voice: BV1vW4y1e7bn

BGM: 风神少女

If anything in the video is wrong, feel free to point it out (^▽^)

Will take down upon request if anything infringes.

still image to dancing

Everybody Dance Now:

https://github.com/carolineec/EverybodyDanceNow

EDN PyTorch implementation:

https://github.com/Lotayou/everybody_dance_now_pytorch
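Everybody Dance Now transfers motion by mapping pose keypoints detected on a source dancer onto a target subject, after normalizing the source pose to the target's body scale and position. A minimal sketch of that normalization idea (the keypoint format and the head/ankle heuristic here are illustrative, not the paper's exact scheme):

```python
def normalize_pose(source_kpts, src_ankle_y, src_head_y, tgt_ankle_y, tgt_head_y):
    """Rescale source keypoints so the source figure's height and
    ground position match the target's (y grows downward in images)."""
    scale = (tgt_ankle_y - tgt_head_y) / (src_ankle_y - src_head_y)
    # translate so the ankle lines coincide, scaling about the ankle line;
    # x is scaled uniformly (a real implementation would also recenter it)
    return [(x * scale, tgt_ankle_y + (y - src_ankle_y) * scale)
            for x, y in source_kpts]

# toy example: source spans y=10 (head) to y=110 (ankle);
# target spans y=50 to y=250, so the scale factor is 2.0
kpts = [(30.0, 10.0), (30.0, 60.0), (30.0, 110.0)]
print(normalize_pose(kpts, 110.0, 10.0, 250.0, 50.0))
# → [(60.0, 50.0), (60.0, 150.0), (60.0, 250.0)]
```

The paper's actual scheme is more involved (it tracks ankle positions across the whole video); this toy version just matches one figure's vertical extent to another's.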


2022-05-29
GAN Video Generation: Motion-Driven Still Image To Video

Thin-Plate Spline Motion Model

Video animation generation using only a few character portraits: draws character animation from character design sheets (includes a motion driver):

https://github.com/megvii-research/CoNR

Galgame video generator using pygame (automated galgame-style animation generator):

https://github.com/w4123/TRPG-Replay-Generator

DIGAN, can generate Tai Chi videos.

Suggestion: segment the video first, then use this to do the rest of the work.

https://github.com/sihyun-yu/digan?ref=pythonawesome.com

https://sihyun-yu.github.io/digan/

https://pythonawesome.com/official-pytorch-implementation-of-generating-videos-with-dynamics-aware-implicit-generative-adversarial-networks/
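The "segment the video first" suggestion can be as simple as masking the moving foreground in each frame before feeding frames to the model. A stdlib-only toy sketch using background differencing as a stand-in for a real segmentation model:

```python
def foreground_mask(frame, background, threshold=30):
    """Mark pixels that differ from a static background frame by more
    than `threshold`; frames are 2D lists of grayscale values."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(row, brow)]
            for row, brow in zip(frame, background)]

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 200, 10], [10, 220, 10]]
print(foreground_mask(frame, background))
# → [[0, 1, 0], [0, 1, 0]]
```

A real pipeline would use a matting or segmentation model (e.g. a person-segmentation network) rather than background differencing, but the shape of the step is the same.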

MoCoGAN can generate the same object performing different actions, as well as the same action performed by different objects:

https://github.com/sergeytulyakov/mocogan

Still-image-to-video generation with talking and hand movement; compared to FOMM, it can even animate non-human objects:

https://snap-research.github.io/articulated-animation/

montage.ai generates videos from music using a deep analyzer:

https://github.com/Tartar-san/montage.ai
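Cutting video to music usually starts with locating high-energy moments in the audio. A stdlib sketch of picking candidate cut points from a windowed RMS energy envelope (the window size and threshold factor are arbitrary choices here, and montage.ai's analyzer is far more sophisticated):

```python
import math

def cut_points(samples, rate, window=1024, k=1.5):
    """Return timestamps (seconds) where windowed RMS energy exceeds
    k times the mean energy -- naive 'loud moment' candidates."""
    energies = []
    for i in range(0, len(samples) - window, window):
        win = samples[i:i + window]
        energies.append(math.sqrt(sum(s * s for s in win) / window))
    mean = sum(energies) / len(energies)
    return [i * window / rate
            for i, e in enumerate(energies) if e > k * mean]

# synthetic audio: one second of silence with two loud bursts
rate = 8000
samples = [0.0] * rate
for start in (2000, 6000):
    for i in range(start, start + 500):
        samples[i] = 1.0  # full-scale burst
print(cut_points(samples, rate))  # → [0.256, 0.768]
```

Real beat tracking would look at onset strength and tempo rather than raw energy, but cuts placed at energy peaks already feel roughly "on the music".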

TikTok hashtag montage:

https://github.com/andreabenedetti/tiktok-montage


2022-05-13
The Singing Bot

The still-image-to-singing-face bot: lip-sync video generation.

SadTalker

wombo.ai, likely based on a talking-head model or Yanderifier.

https://github.com/mchong6/GANsNRoses/

https://github.com/williamyang1991/VToonify

Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of portrait-image toonification models built on the success of the powerful StyleGAN have been proposed, these image-oriented methods show clear limitations when applied to video. In this work, the authors study challenging, controllable, high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits from multi-scale content features extracted by an encoder, better preserving frame details; taking un-aligned faces as input helps it output complete face regions with natural motion. The framework is compatible with existing StyleGAN-based image toonification models, extending them to video toonification, and inherits their appealing property of flexible control over style color and intensity. The work demonstrates two instantiations of VToonify, built on Toonify and DualStyleGAN; extensive experimental results show that the proposed VToonify framework outperforms existing methods in generating high-quality, temporally coherent artistic portrait videos with flexible style control.

All-in-one Colab for text-to-talking-face generation; also consider the PaddleSpeech example:

https://github.com/ChintanTrivedi/ask-fake-ai-karen

Available from PaddleGAN, used as an example in PaddleSpeech (the artificial news host).

Lip-sync-accurate Wav2Lip:

https://github.com/Rudrabha/Wav2Lip
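Wav2Lip takes a face video plus a speech audio file and re-renders the mouth to match the audio. A stdlib sketch for generating a known-good mono 16-bit WAV to smoke-test such a pipeline with (the 16 kHz rate is just a common speech-processing choice, not a documented Wav2Lip requirement):

```python
import math
import struct
import wave

def write_test_wav(path, seconds=1.0, rate=16000, freq=220.0):
    """Write a mono 16-bit PCM sine tone -- a stand-in for real speech."""
    n = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(20000 * math.sin(2 * math.pi * freq * i / rate)))
            for i in range(n))
        w.writeframes(frames)

write_test_wav("probe.wav")
with wave.open("probe.wav", "rb") as w:
    print(w.getframerate(), w.getnframes())  # → 16000 16000
```

Feeding a clean tone like this first makes it easy to separate audio-format problems from model problems before using real recordings.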

LipGAN generates realistic lip-synced talking-head animation (use the fully_pythonic branch or the Google Colab notebook):

https://github.com/Rudrabha/LipGAN

Google's lipsync implementation, using TensorFlow FaceMesh:

https://github.com/google/lipsync

https://lipsync.withyoutube.com/

https://github.com/tensorflow/tfjs-models/tree/master/facemesh
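A FaceMesh-driven lipsync loop typically reduces the dense face landmarks to a mouth-openness value per frame and maps that to a mouth shape. A sketch of the measurement, assuming MediaPipe-style (x, y) landmarks; the indices used here (13/14 for the inner lips, 10/152 for forehead/chin) follow my reading of the FaceMesh topology and should be checked against the docs:

```python
def mouth_openness(landmarks, upper=13, lower=14, top=10, chin=152):
    """Vertical inner-lip gap, normalized by face height so the value
    is invariant to how close the face is to the camera.

    `landmarks` maps index -> (x, y); the index defaults are assumed
    MediaPipe FaceMesh conventions, not verified constants.
    """
    gap = abs(landmarks[lower][1] - landmarks[upper][1])
    face_h = abs(landmarks[chin][1] - landmarks[top][1])
    return gap / face_h

# toy frame: lips 12 px apart on a 240 px tall face
pts = {10: (100, 40), 152: (100, 280), 13: (100, 150), 14: (100, 162)}
print(mouth_openness(pts))  # → 0.05
```

Thresholding this value into a few bands (closed / half / open) is usually enough to drive a simple sprite-based mouth.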

Network reverse engineering of wombo.ai:

https://github.com/the-garlic-os/wombo-reverse-engineering

Matamata, using Vosk models; the Gentle lip-sync method is recommended:

https://github.com/AI-Spawn/Auto-Lip-Sync

https://github.com/Matamata-Animator/Matamata-Core

https://github.com/Yey007/Auto-Lip-Sync
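Matamata-style automation boils down to three steps: get timed phonemes from the recognizer (Vosk or Gentle), map each phoneme to a mouth shape, and hold that shape until the next phoneme. A sketch of the mapping step (the phoneme symbols and viseme names here are illustrative, not any project's actual tables):

```python
# coarse phoneme -> viseme table; real tables distinguish many more shapes
VISEMES = {
    "AA": "open", "AE": "open", "AH": "open",
    "IY": "wide", "EH": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
}

def phonemes_to_frames(timed_phonemes, fps=24, default="rest"):
    """Expand (phoneme, start_sec, end_sec) triples into one viseme
    name per video frame, holding each shape for its duration."""
    end = max(t[2] for t in timed_phonemes)
    frames = [default] * int(end * fps)
    for ph, start, stop in timed_phonemes:
        shape = VISEMES.get(ph, default)
        for f in range(int(start * fps), int(stop * fps)):
            frames[f] = shape
    return frames

frames = phonemes_to_frames([("M", 0.0, 0.25), ("AA", 0.25, 0.5)])
print(frames[0], frames[11])  # → closed open
```

The frame list then indexes directly into a folder of mouth-shape images when rendering the video.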

AI-based lip reading, probably irrelevant to lip-sync video generation:

https://github.com/eflood23/lipsync


2022-04-21
Articulated Animation

This one is dead simple: use a video of a real human talking as the driving motion.

https://github.com/snap-research/articulated-animation
