This article covers methods and tools for generating videos from images or portraits, including CoNR, DIGAN, MoCoGAN, the TRPG Replay Generator, Montage.ai, and TikTok montages, which employ techniques such as GANs and thin-plate spline motion models.

Thin-plate spline motion model: animates a still source image by warping it toward the poses of a driving video with TPS transformations.
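
To make the idea concrete, here is a minimal numpy sketch of the 2D thin-plate spline warp such models are built on: keypoints detected on the source and driving frames act as control points, and the fitted spline deforms a dense grid that is then used to sample the source image. This is illustrative only, not the repository's code.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points onto dst.
    src, dst: (N, 2) arrays of corresponding keypoints."""
    n = src.shape[0]
    # Radial kernel U(r) = r^2 log r^2, with U(0) = 0 by convention.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(d2 == 0, 0.0, d2 * np.log(d2 + 1e-12))
    P = np.hstack([np.ones((n, 1)), src])       # affine terms
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]                     # radial weights, affine part

def tps_apply(points, src, w, a):
    """Warp arbitrary points (e.g. a dense pixel grid) with a fitted TPS."""
    d2 = np.sum((points[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = np.where(d2 == 0, 0.0, d2 * np.log(d2 + 1e-12))
    P = np.hstack([np.ones((len(points), 1)), points])
    return U @ w + P @ a
```

Warping a dense coordinate grid with `tps_apply` and resampling the source image at the warped coordinates produces one animated frame per driving frame.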

Video animation generation using only a few character portraits: CoNR draws character animation from the character's design sheets (it includes an animation driver):

https://github.com/megvii-research/CoNR

Galgame-style video generator using pygame (an automated generator for galgame-like animations); a minimal frame-rendering sketch follows the link:

https://github.com/w4123/TRPG-Replay-Generator
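
As a rough idea of the approach (not the repository's actual API), here is a self-contained pygame sketch that renders galgame-style dialogue frames offscreen with a typewriter effect and saves them as PNGs; file names and layout are made up for illustration.

```python
import pygame

pygame.init()
W, H = 1280, 720
frame = pygame.Surface((W, H))          # offscreen surface, no window needed
font = pygame.font.SysFont(None, 36)

dialogue = "This line appears one character at a time."
frames_per_char = 2                     # typewriter speed

for i in range(len(dialogue) * frames_per_char):
    frame.fill((30, 30, 60))            # stand-in for a background image
    pygame.draw.rect(frame, (200, 180, 160),
                     pygame.Rect(W // 2 - 120, 160, 240, 360))  # stand-in sprite
    pygame.draw.rect(frame, (0, 0, 0),
                     pygame.Rect(40, H - 180, W - 80, 140))     # dialogue box
    shown = dialogue[: i // frames_per_char + 1]
    frame.blit(font.render(shown, True, (255, 255, 255)), (60, H - 160))
    pygame.image.save(frame, f"frame_{i:05d}.png")

# Encode the frames afterwards, e.g.: ffmpeg -framerate 30 -i frame_%05d.png out.mp4
```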

DIGAN (a dynamics-aware implicit GAN) can generate Tai Chi videos.

It is best to segment the source video into short clips first and then let DIGAN do the work; see the ffmpeg sketch after the links.

https://github.com/sihyun-yu/digan?ref=pythonawesome.com

https://sihyun-yu.github.io/digan/

https://pythonawesome.com/official-pytorch-implementation-of-generating-videos-with-dynamics-aware-implicit-generative-adversarial-networks/
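
For the segmentation step, a simple option is ffmpeg's segment muxer; the snippet below (file names are placeholders) cuts a long video into fixed-length clips without re-encoding.

```python
import subprocess

def split_into_clips(path, seconds=2, out_pattern="clip_%04d.mp4"):
    """Cut a long video into roughly fixed-length clips with ffmpeg's segment
    muxer. Stream copy means cuts snap to keyframes, fine for training data."""
    subprocess.run(
        ["ffmpeg", "-i", path,
         "-c", "copy", "-f", "segment",
         "-segment_time", str(seconds),
         "-reset_timestamps", "1",
         out_pattern],
        check=True,
    )

split_into_clips("taichi_long.mp4", seconds=2)   # placeholder input file
```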

MoCoGAN can generate the same object performing different actions, as well as the same action performed by different objects:

https://github.com/sergeytulyakov/mocogan
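
The trick behind that property is MoCoGAN's latent split into a per-clip content code and a per-frame recurrent motion code. Below is a toy PyTorch sketch of the sampling scheme (the class name and layer sizes are illustrative, and the MLP generator stands in for the real convolutional one).

```python
import torch
import torch.nn as nn

class MoCoSampler(nn.Module):
    def __init__(self, dim_content=50, dim_motion=10, img_dim=64 * 64):
        super().__init__()
        self.dim_content, self.dim_motion = dim_content, dim_motion
        self.rnn = nn.GRUCell(dim_motion, dim_motion)   # evolves the motion code
        self.G = nn.Sequential(                         # toy image generator
            nn.Linear(dim_content + dim_motion, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, n_frames=16):
        z_c = torch.randn(1, self.dim_content)          # "what": fixed per clip
        h = torch.zeros(1, self.dim_motion)
        frames = []
        for _ in range(n_frames):
            h = self.rnn(torch.randn(1, self.dim_motion), h)  # "how": per frame
            frames.append(self.G(torch.cat([z_c, h], dim=1)))
        return torch.stack(frames, dim=1)               # (1, T, img_dim)

video = MoCoSampler()(n_frames=16)
```

Fixing `z_c` while resampling the motion noise gives the same object performing different actions; reusing one motion sequence across different `z_c` gives different objects performing the same action.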

Still-image-to-video generation with talking and hand motion; compared to FOMM, it can even animate non-human objects (a keypoint-transfer sketch follows the link):

https://snap-research.github.io/articulated-animation/
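
The articulated-animation method models motion with learned regions and affine transforms, but it builds on the same relative motion transfer idea as FOMM. A minimal numpy sketch of that idea (function and variable names are hypothetical):

```python
import numpy as np

def transfer_motion(kp_source, kp_driving, kp_driving_initial):
    """Apply the driving video's keypoint displacements, measured relative
    to its first frame, to the source image's keypoints. Arrays are (K, 2)."""
    return kp_source + (kp_driving - kp_driving_initial)

kp_src = np.random.rand(10, 2)     # keypoints detected on the still image
kp_drv0 = np.random.rand(10, 2)    # driving video, first frame
kp_drv_t = kp_drv0 + 0.05          # driving video, frame t, slightly moved
print(transfer_motion(kp_src, kp_drv_t, kp_drv0))
```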

Montage.ai generates videos cut to music using a deep audio analyzer:

https://github.com/Tartar-san/montage.ai
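
The core of music-driven editing is locating beats to cut on. A small librosa sketch (the file name is a placeholder, and this is not Montage.ai's actual code):

```python
import librosa

def beat_times(audio_path):
    """Return the estimated tempo and the beat timestamps in seconds;
    a montage tool can place video cuts on (a subset of) these beats."""
    y, sr = librosa.load(audio_path)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return tempo, librosa.frames_to_time(beat_frames, sr=sr)

tempo, times = beat_times("song.mp3")
print(f"{float(tempo):.1f} BPM, first cuts at {times[:4]}")
```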

TikTok hashtag montage:

https://github.com/andreabenedetti/tiktok-montage
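
For the final stitching step that any montage tool needs, here is a short moviepy (1.x API) sketch that concatenates downloaded clips into one video; file names are placeholders and this is not the repository's code.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

paths = ["clip_0001.mp4", "clip_0002.mp4", "clip_0003.mp4"]   # placeholders
clips = [VideoFileClip(p).subclip(0, 3) for p in paths]       # first 3 s of each
montage = concatenate_videoclips(clips, method="compose")
montage.write_videofile("montage.mp4", fps=30)
```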
