This article collects links and notes on driving 3D avatars (MMD / VTuber style) with human pose detection: motion-capture and tracking repositories, OpenGL and MMD rendering/recording, SMPL-X and ExPose models, anime face detection, and music-driven dance generation, with brief thoughts on how they might fit together.

3D avatar motion generation / video generation / virtual idol / VTuber:

https://github.com/xianfei/SysMocap

human pose detection:

https://github.com/facebookresearch/VideoPose3D

opengl recording:

https://lencerf.github.io/post/2019-09-21-save-the-opengl-rendering-to-image-file/

http://www.songho.ca/opengl/gl_pbo.html#pack

https://stackoverflow.com/questions/7634966/save-opengl-rendering-to-video

https://www.codeproject.com/articles/15941/recording-directx-and-opengl-rendered-animations

https://www.glfw.org/documentation.html
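
The links above cover saving OpenGL frames either synchronously with glReadPixels or asynchronously with pixel buffer objects. Below is a minimal sketch of the synchronous variant, assuming PyOpenGL, glfw and Pillow are installed; the window size and the commented-out render() call are placeholders for your own scene.

```python
import glfw
from OpenGL.GL import (GL_PACK_ALIGNMENT, GL_RGB, GL_UNSIGNED_BYTE,
                       glPixelStorei, glReadPixels)
from PIL import Image, ImageOps

WIDTH, HEIGHT = 1280, 720

def save_frame(path):
    # Tightly packed rows so Pillow can consume the buffer directly.
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    data = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE)
    image = Image.frombytes("RGB", (WIDTH, HEIGHT), bytes(data))
    # OpenGL's origin is bottom-left; image files expect top-left.
    ImageOps.flip(image).save(path)

glfw.init()
window = glfw.create_window(WIDTH, HEIGHT, "capture", None, None)
glfw.make_context_current(window)

frame = 0
while not glfw.window_should_close(window):
    # render()  # draw the MMD scene into the back buffer here
    save_frame(f"frame_{frame:05d}.png")  # read back before swapping
    glfw.swap_buffers(window)
    glfw.poll_events()
    frame += 1
glfw.terminate()
```

The PBO approach from the songho article avoids the pipeline stall of glReadPixels; either way the saved frames can be muxed into a video with ffmpeg.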

download expose models:

https://expose.is.tue.mpg.de/downloads

smpl-x model download:

https://smpl-x.is.tue.mpg.de/download.php

dataset zoo (multi-object tracking):

https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/DATASET_ZOO.md

mmd auto tracking:

https://github.com/errno-mmd/mmdmatic/blob/master/setup.bat

https://github.com/miu200521358/expose_mmd

https://github.com/miu200521358/AlphaPose-MMD

smplx (alternative to the expose body tracker):

https://github.com/vchoutas/smplx
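
A minimal sketch of loading the model downloaded above with the smplx package, roughly following its README demo; the "models" folder path and the neutral-gender model are assumptions.

```python
import torch
import smplx

# "models/" is assumed to contain the smplx/ subfolder with the files
# unpacked from the SMPL-X download page.
model = smplx.create("models", model_type="smplx", gender="neutral")

betas = torch.zeros(1, model.num_betas)                   # body shape
expression = torch.zeros(1, model.num_expression_coeffs)  # facial expression
output = model(betas=betas, expression=expression, return_verts=True)

print(output.vertices.shape)  # (1, 10475, 3) for SMPL-X
print(output.joints.shape)
```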

face tracking:

https://github.com/Aditya-Khadilkar/Face-tracking-with-Anime-characters

anime face detector:

https://github.com/nagadomi/lbpcascade_animeface

https://github.com/qhgz2013/anime-face-detector
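
nagadomi's detector is just an OpenCV LBP cascade, so usage is a few lines; this sketch follows the example in that repo's README, and lbpcascade_animeface.xml must be downloaded from the repo first.

```python
import cv2

cascade = cv2.CascadeClassifier("lbpcascade_animeface.xml")
image = cv2.imread("frame.png")
gray = cv2.equalizeHist(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                 minNeighbors=5, minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detected.png", image)
```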

anime facial features:

https://github.com/pranau97/anime-detection

repair anime images:

https://github.com/youyuge34/Anime-InPainting

paint manga from sketch (with color blocks):

https://github.com/youyuge34/PI-REC

If we can re-trace the actions and expressions performed by VTubers, we can monetize those "highlight cuts".

You can first take keypoints from existing datasets, render MMD videos from them, and use those pairs as a training set; alternatively, estimate poses from raw video and build the dataset that way.

Found by chance while browsing MMD-related projects; this one has a surprisingly large number of stars. It is an object detection / instance segmentation library:

https://github.com/open-mmlab/mmdetection
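
A minimal inference sketch using the high-level API from the mmdet 2.x getting-started docs; the config and checkpoint paths are placeholders for whichever model you pick from its model zoo.

```python
from mmdet.apis import init_detector, inference_detector

config_file = "configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
checkpoint_file = "checkpoints/faster_rcnn_r50_fpn_1x_coco.pth"

# Build the detector from config + pretrained weights.
model = init_detector(config_file, checkpoint_file, device="cuda:0")

# Run detection on a single frame and save the visualization.
result = inference_detector(model, "mmd_frame.png")
model.show_result("mmd_frame.png", result, out_file="mmd_frame_det.png")
```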

Rendering MMD can be done with an MMD viewer such as https://github.com/benikabocha/saba, or with a general-purpose renderer like Blender or Unity. We must bake the physics before the dance is rendered.
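
Baking can be scripted. A minimal sketch, assuming the model and motion have already been imported into Blender (e.g. via mmd_tools) and the skirt/hair simulation runs through Blender's physics caches:

```python
import bpy

scene = bpy.context.scene
# Make sure the bake covers the whole motion before rendering.
scene.frame_start = 1
scene.frame_end = 3000  # adjust to the length of the dance

# Bake every physics cache in the file (rigid body world, cloth, ...).
bpy.ops.ptcache.bake_all(bake=True)
```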

Found another dedicated MMD renderer, built on Bullet physics:

https://github.com/jinfagang/mmc

Found an interesting poetry-composition repo by the same author:

https://github.com/jinfagang/tensorflow_poems

a toolkit similar to mediapipe/paddlevideo:

https://pypi.org/project/alfred-py/

three.js has multiple loaders:

https://github.com/mrdoob/three.js/tree/dev/examples/js/loaders

https://github.com/hanakla/three-mmd-loader

render MMD using saba lib:

https://github.com/WLiangJun/MMD-Desktop-mascot

https://github.com/miu200521358/expose_mmd/fork

music-based dance generation:

https://github.com/DeepVTuber/DanceNet3D

https://github.com/ColbyZhuang/music2dance_DanceNet

https://github.com/caijianfei/Music2Dance

characters:

https://www.mixamo.com/#/?page=1&type=Character
