Make-A-Video And Its Related Text To Video Projects
text-to-video
make-a-video
Maria
OFA
GEN-2
CogVideo
Nuwa
MoCoGAN
tgan-pytorch
Redditube
Ningyov
multimodal
deep learning
video generation
Text-to-video projects like make-a-video, Maria, OFA, GEN-2, CogVideo, Nuwa, MoCoGAN, tgan-pytorch, Redditube, and Ningyov employ multimodal techniques with distinctive features, leveraging deep learning for diverse video generation tasks.
It is worth noting that “video2video” is a much simpler task than “text2video”; basic editing and semantic alignment are simpler still.
Similar models are also listed here, since video-generating models are usually multimodal.
Maria, a visual-experience-powered conversational agent, which came up incidentally
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
According to its paper, it has been compared against a range of models.
CogVideo, able to process both Chinese and English input
Make-A-Video in PyTorch: text-to-video generation
NUWA: text-to-video generation
There are also some video-generator projects with little or no deep learning involved:
Automatic-Youtube-Reddit-Text-To-Speech-Video-Generator-and-Uploader
Tools for slideshows, video effects, and presentations:
vidshow: a simple CLI to generate slideshow videos with native FFmpeg
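As a rough sketch of the kind of FFmpeg invocation a slideshow tool like this wraps, the snippet below builds the standard image-sequence-to-video command. The file pattern, per-image duration, and output name are illustrative assumptions, not vidshow's actual defaults:

```python
import shutil
import subprocess

def slideshow_cmd(pattern="img%03d.png", seconds_per_image=2, out="slideshow.mp4"):
    """Build a standard FFmpeg command that turns a numbered image
    sequence into an H.264 slideshow video (one image every N seconds)."""
    return [
        "ffmpeg", "-y",
        "-framerate", f"1/{seconds_per_image}",  # input rate: one frame per N seconds
        "-i", pattern,                           # numbered images: img001.png, img002.png, ...
        "-c:v", "libx264",                       # encode with H.264
        "-r", "30",                              # output frame rate
        "-pix_fmt", "yuv420p",                   # widest player compatibility
        out,
    ]

cmd = slideshow_cmd()
print(" ".join(cmd))
# Only invoke FFmpeg if it is actually installed on this machine.
if shutil.which("ffmpeg"):
    subprocess.run(cmd, check=False)
```

The same command works directly in a shell; wrapping it in a function just makes the flags easy to parameterize, which is essentially what slideshow CLIs do on top of FFmpeg.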
Twitch-Best-Of: creates best-of videos on Twitch without a token
Ningyov: galgame (visual novel) effects