Music-to-Video Generator GAN
Researchers have developed text-to-video and music-to-video generative systems that create dance animations conditioned on music genre. These approaches combine a choreography-oriented embedding framework with cross-modal transformers trained on 3D dance datasets, allowing the generation of dance animations synchronized with the style, rhythm, and structure of a given piece of music.
https://www.youtube.com/watch?v=V8MlYa_yhF0
https://netease-gameai.github.io/ChoreoMaster/Paper.pdf
The system can generate dance animations in different styles, such as jazz, anime, and street dance, according to the style of the music. Given a piece of music, it can automatically generate a high-quality dance motion sequence that accompanies the style, rhythm, and structure of the input. To achieve this, the authors introduce a novel choreography-oriented choreomusical embedding framework, which constructs a unified embedding space capturing the style and rhythm relationships between music phrases and dance phrases.
https://www.youtube.com/watch?v=VrVsAcgFK_4
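The core idea of a unified choreomusical embedding space can be illustrated with a toy sketch (this is not the ChoreoMaster code; the encoders, dimensions, and features below are hypothetical): embed a music phrase and a library of dance phrases into one space, then retrieve the dance phrase whose embedding best matches the music.

```python
# Toy sketch of choreomusical embedding matching, assuming simple
# linear encoders and random placeholder features.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoders" projecting each modality into a
# shared 16-dim embedding space.
W_music = rng.standard_normal((16, 32))        # music features: 32-dim
W_dance = rng.standard_normal((16, 24))        # dance features: 24-dim

def embed(W, x):
    z = W @ x
    return z / np.linalg.norm(z)               # unit-normalize for cosine similarity

music_phrase = rng.standard_normal(32)         # e.g. beat/style descriptors
dance_library = rng.standard_normal((10, 24))  # 10 candidate dance phrases

m = embed(W_music, music_phrase)
scores = [m @ embed(W_dance, d) for d in dance_library]
best = int(np.argmax(scores))                  # phrase best aligned in style/rhythm
print("best-matching dance phrase:", best)
```

In the real system the encoders would be learned so that matching music/dance pairs land close together; here the projections are random and serve only to show the retrieval mechanics.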
This method proposes an architecture based on a cross-modal transformer, together with a new 3D dance dataset containing 3D motion reconstructed from real dancers.
Project page: https://google.github.io/aichoreographer
Dataset: https://google.github.io/aistplusplus_dataset/
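A minimal sketch of the cross-modal idea (an assumed simplification, not the paper's full model): motion tokens attend to music tokens via cross-attention, so the generated motion features are conditioned on the music. All shapes and weights below are illustrative.

```python
# Single-head cross-attention sketch: queries come from motion,
# keys/values come from music.
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # embedding dimension
motion = rng.standard_normal((4, d))    # 4 motion tokens (queries)
music = rng.standard_normal((6, d))     # 6 music tokens (keys/values)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = motion @ Wq, music @ Wk, music @ Wv

# Each motion token attends over all music tokens.
attn = softmax(Q @ K.T / np.sqrt(d))    # shape (4, 6); rows sum to 1
out = attn @ V                          # music-conditioned motion features, (4, 8)
print(out.shape)
```

A full model would stack such layers with self-attention over motion and feed the output to a motion decoder; this fragment only shows how the music conditioning enters.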
Video generation from music, based on BigGAN:
https://github.com/Remideza/MichelAI/
BigGAN (Large Scale GAN Training for High Fidelity Natural Image Synthesis), PyTorch implementation:
https://github.com/ajbrock/BigGAN-PyTorch
Self-supervised dance video synthesis:
https://github.com/xrenaa/Music-Dance-Video-Synthesis
"Show Me What and Tell Me How" (MMVID) by Snap Research, based on OpenAI CLIP, with pretrained models; able to generate arbitrary video from a text description:
https://github.com/snap-research/MMVID
Text-to-video generator based on VQGAN and CLIP, with basic Colab notebooks, by Kapwing (the online video editor):