bilibili 直播姬 (Live Streaming Assistant) 2D models 3D models

anime character segmentation
animoji
avatarkit
image segmentation
pyjom
talking head
video driven model
video generator
vtuber
waifu segmentation
This article surveys the development of anime-style avatars, with an emphasis on 3D models, Linux compatibility, and face tracking tools. It explores techniques and projects such as moeflow, AniSeg, NextHuman Beta 0.9, FaceRig, StyleGAN, and facial landmark detection in Python for creating and animating digital people, and discusses applications like Animoji, VTuber talking heads, and live streaming.
Published

August 13, 2022


3d pose tracker

Rendered in Unity; needs a GPU.

Sysmocap

WHAT I WANT (or nearly): requires real 3D models; written in JavaScript.

Cannot output video, though?

A cross-platform real-time video-driven motion capture and 3D virtual character rendering system for VTuber/Live/AR/VR.

Does not require a discrete graphics card and runs smoothly even on eight-year-old computers

VTuber: Python, Unity

Search for “vtuber” together with “motion capture” and you will get many head-only trackers and renderers for Windows but not Linux, plus some broadcast templates/frameworks. Many accept a single image (an anime head, with background removed) as input instead of 2D/3D models.

Face tracking only: these tools capture the face, mouth and eye states, and head direction, then bind them to Live2D models (see the sketch below).
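
As a rough illustration of that binding, here is a minimal sketch mapping landmark measurements to standard Live2D parameter IDs (ParamEyeLOpen, ParamMouthOpenY, ParamAngleX). The landmark indices assume MediaPipe FaceMesh, and the normalization constants are guesses, not values taken from any tool mentioned here.

```python
# A sketch only: landmark indices follow MediaPipe FaceMesh, and the
# normalization constants are rough guesses, not values from any tool here.
def live2d_params(lm) -> dict[str, float]:
    """Map normalized face landmarks to Live2D parameter values."""
    def dist(a: int, b: int) -> float:
        return ((lm[a].x - lm[b].x) ** 2 + (lm[a].y - lm[b].y) ** 2) ** 0.5

    # Eye openness: lid gap (159-145) relative to eye width (33-133).
    eye_open = dist(159, 145) / (dist(33, 133) + 1e-9)
    # Mouth openness: inner-lip gap (13-14) relative to mouth width (61-291).
    mouth_open = dist(13, 14) / (dist(61, 291) + 1e-9)
    # Head yaw proxy: nose tip (1) offset from the cheek midpoint (234, 454).
    yaw = (lm[1].x - (lm[234].x + lm[454].x) / 2) / (dist(234, 454) + 1e-9)

    return {
        "ParamEyeLOpen": min(eye_open / 0.35, 1.0),
        "ParamMouthOpenY": min(mouth_open / 0.4, 1.0),
        "ParamAngleX": max(-30.0, min(30.0, yaw * 120.0)),
    }
```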

Virtual digital humans (虚拟数字人): MetaHuman

NextHuman Beta 0.9 has launched its public beta, promising a 5-minute high-quality walkthrough into a new era of “zero-threshold” digital human creation -> https://nexthuman.cn The free version runs on Windows and requires a high-end GPU (GTX 1070).

anime character segmentation

To remove false positives, make sure an anime face is actually in view; otherwise, mark the detection as a false positive (a sketch of this filter follows).
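
A minimal sketch of that filter, assuming the lbpcascade_animeface.xml cascade from https://github.com/nagadomi/lbpcascade_animeface has been downloaded locally:

```python
# A sketch of the false-positive filter: keep a crop only if an anime face
# is detected in it. Assumes lbpcascade_animeface.xml is downloaded locally.
import cv2

cascade = cv2.CascadeClassifier("lbpcascade_animeface.xml")

def has_anime_face(image_path: str) -> bool:
    """Return True if at least one anime face is detected in the image."""
    image = cv2.imread(image_path)
    if image is None:
        return False
    gray = cv2.equalizeHist(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))
    faces = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24)
    )
    return len(faces) > 0
```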

You can use anime character recognition such as moeflow or the OpenCV anime face detector, along with a perceptual hash (pHash) library, to group similar characters: compare perceptual image similarity and line the matches up in a series, as sketched below.
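
A minimal sketch of that grouping, assuming character crops saved as PNG files and using the imagehash and Pillow libraries; the max_distance threshold is a guess:

```python
# A sketch: greedily group images whose perceptual hashes are close.
# Requires: pip install imagehash pillow
from pathlib import Path

import imagehash
from PIL import Image

def group_by_phash(image_dir: str, max_distance: int = 8) -> list[list[str]]:
    """Group images whose pHashes are within max_distance bits (Hamming)."""
    hashes = {
        str(p): imagehash.phash(Image.open(p))
        for p in sorted(Path(image_dir).glob("*.png"))
    }
    groups: list[list[str]] = []
    for path, h in hashes.items():
        for group in groups:
            # Hash subtraction yields the Hamming distance between hashes.
            if h - hashes[group[0]] <= max_distance:
                group.append(path)
                break
        else:
            groups.append([path])
    return groups
```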

AniSeg is able to segment anime characters and heads, using Mask R-CNN; a stand-in inference sketch follows.
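
As a stand-in (AniSeg ships its own checkpoints), here is a minimal sketch of Mask R-CNN instance segmentation with detectron2; the weights path is hypothetical:

```python
# A sketch using detectron2's COCO Mask R-CNN config as a stand-in;
# "aniseg_character.pth" is a hypothetical anime-tuned checkpoint path.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "aniseg_character.pth"  # hypothetical weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
image = cv2.imread("character.png")
outputs = predictor(image)
# One boolean mask per detected character instance.
masks = outputs["instances"].pred_masks.cpu().numpy()
```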

Yet another anime character segmentation model, using SOLOv2 and CondInst.

waifu segmentation

High-accuracy anime character segmentation

Automatic manga drawing: a few strokes become someone’s portrait / anime avatar

https://menyifang.github.io/projects/DCTNet/DCTNet.html

Automatic face sculpting; a GAN that puts a mask on faces

https://github.com/futscdav/Chunkmogrify

Selfie to anime: turning photos into anime-style pictures

selfie2anime, with pretrained models

Genshin Impact MMD model downloads

模之屋 (registration required):

https://www.aplaybox.com/u/680828836

夕蓝资源网 (direct downloads); other 3D models are also available there:

https://www.seoliye.com/tags/53.html

use voice to power up static images

Voice-powered animated cartoon figures (a generic sketch of the idea follows)
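
As a generic illustration (not how any specific project above works), a minimal sketch that derives a per-frame mouth-open value from audio loudness, using numpy and soundfile:

```python
# A sketch: compute a 30 fps RMS loudness envelope and normalize it to a
# 0..1 mouth-open value. Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

audio, sr = sf.read("voice.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # downmix to mono

hop = sr // 30  # one value per video frame at 30 fps
rms = np.array([
    np.sqrt(np.mean(audio[i:i + hop] ** 2))
    for i in range(0, len(audio) - hop, hop)
])
mouth_open = rms / (rms.max() + 1e-9)
# mouth_open[t] can scale a mouth sprite layered over the static image.
```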

Powered by Jeeliz (a web deep learning runtime, like TensorFlow.js)

Weboji, highly similar to Animoji, built with three.js and a cute fox avatar

Face filters alter the face, like putting on glasses; minor changes to avoid privacy/copyright concerns?

openface

facial feature extraction

facerig

facerig location: /Software/Program Files (x86)/FaceRig

I’ve seen Python code inside FaceRig.

FaceRig does not offer head-only rendering, but I suppose that could be changed?

avatarify python

Infinite avatars by using StyleGAN and the First Order Motion Model

create static portrait avatar (svg?)

animoji from apple

Facial landmark detection in Python: Animoji-Animate (see the sketch below)
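
A minimal sketch of facial landmark detection from a webcam in Python, using MediaPipe FaceMesh as a stand-in for Animoji-Animate’s own pipeline:

```python
# A sketch: draw FaceMesh landmarks from the default webcam.
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1,
                                            refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        h, w, _ = frame.shape
        for lm in results.multi_face_landmarks[0].landmark:
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)),
                       1, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```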

Animoji is an Apple private framework; this is actually the source of those animated poop videos seen earlier.

2D models, VTuber “skin” avatars (皮套), animatable: virtual VTuber talking heads

https://github.com/yuyuyzl/EasyVtuber

https://github.com/pkhungurn/talking-head-anime-3-demo

https://github.com/GunwooHan/EasyVtuber

Bilibili official

直播姬 (Bilibili’s Live Streaming Assistant) now supports 2D facial capture and 3D-model motion capture.

直播姬 is available for Windows, macOS (M1), and Android.

The 2D models are Live2D models.

Needs further investigation.