
Saturday, 23 July 2022

Making Photos of Trump and Xi Jinping Sing with Python

Recently I've seen quite a few spoof videos of national leaders online: with nothing but a photo you can make them sing, which is great fun. After 伯衡君 (yours truly) looked into it, it turned out to come from a website called Yanderify. That site is unreliable, though, so I dug further and found the project on GitHub, which means the same thing can be done with a desktop application: making a photo sing. It's a lot of fun, so I'm sharing it with everyone.

Project repository

  • https://github.com/dunnousername/yanderifier

First, the project's own description: an AI that takes a source video and an image of a face, and animates the image to match the motion of the source video.

Yanderify is a Python-based project, so we first need to install Python. Follow the link below to the official Python website and download the installer.

Official Python website: https://www.python.org

If your operating system is 64-bit, it's best to download the 64-bit build of Python. On Windows, for example, the download button on the Python homepage may default to the 32-bit installer; in that case, go to the Downloads → Windows page and grab the 64-bit installer instead.
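Not sure whether an existing Python install is 32-bit or 64-bit? A quick standard-library check run inside that interpreter will tell you:

    import platform
    import struct

    # Pointer size in bits: 64 on a 64-bit interpreter, 32 on a 32-bit one.
    print(f"{struct.calcsize('P') * 8}-bit Python")
    print(platform.architecture()[0], "on", platform.machine())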

Next, open Yanderify's GitHub page.

Yanderify:https://github.com/dunnousername/yanderifier

Find the "Releases" link there and download the Zip package of the latest release.

Yanderify runs without installation: unzip it and double-click "Start Yanderify".

Yanderify's interface is very simple: a command-line window plus a bare-bones GUI. All you need to do is pick an image, pick a video, and set an output path. Yanderify does have a few hardware requirements, though.

Yanderify supports GPU acceleration on NVIDIA cards, but only on models above a GTX 750 with more than 2 GB of VRAM. If you are on an AMD card, GPU acceleration is unavailable and you need to tick the "Use CPU" option.
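If you are unsure which case applies to you, PyTorch (the library doing the heavy lifting underneath) can report whether a CUDA-capable card with enough VRAM is visible. A minimal sketch, assuming you have a PyTorch installation handy to run it against:

    import torch

    # Check for a usable NVIDIA GPU and report its VRAM.
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / 1024**3
        print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
        if vram_gb < 2:
            print("Less than 2 GB of VRAM: tick the CPU option in Yanderify.")
    else:
        print("No CUDA-capable NVIDIA GPU visible (e.g. AMD cards): tick the CPU option.")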

When choosing materials, avoid images and videos that are too high-resolution, or the program may crash.
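A simple precaution is to shrink the photo before feeding it in; the underlying face-animation model works at low resolution anyway (the commonly used checkpoints expect 256x256 crops), so nothing is lost. A small Pillow sketch, with photo.jpg standing in for your own file:

    from PIL import Image

    # Shrink an overly large photo before handing it to Yanderify.
    img = Image.open("photo.jpg").convert("RGB")
    img.thumbnail((512, 512))   # keeps the aspect ratio, caps the longer side at 512 px
    img.save("photo_small.jpg", quality=95)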

Click "Go" and Yanderify starts compositing. The first time you run it, though, it also has to download two files. They download quite slowly, so here are their direct URLs in case you'd rather grab them with a download manager:

https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth

https://www.adrianbulat.com/downloads/python-fan/2DFAN4-11f355bf06.pth.tar

Once the download finishes, close Yanderify and put the two files into the following directory:

 C:\Users\<username>\.torch\models
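If the downloads keep stalling in the browser, the same two files can also be fetched and dropped into place with a few lines of standard-library Python (the target path below is the default per-user .torch cache mentioned above; adjust it if your setup differs):

    import urllib.request
    from pathlib import Path

    # Directory the face-detection checkpoints are expected in.
    models_dir = Path.home() / ".torch" / "models"
    models_dir.mkdir(parents=True, exist_ok=True)

    urls = [
        "https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth",
        "https://www.adrianbulat.com/downloads/python-fan/2DFAN4-11f355bf06.pth.tar",
    ]

    for url in urls:
        dest = models_dir / url.rsplit("/", 1)[-1]
        if not dest.exists():
            print("downloading", dest.name)
            urllib.request.urlretrieve(url, dest)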

Then launch Yanderify again and everything should work normally. The project's original README is reproduced below:


 ------------------------------------------------------

first-order-wrapper (formerly known as Yanderify) is a front-end tool for first-order-motion. It aims to make using first-order-motion face animation accessible to everyone, for education and entertainment.

Yanderify is now known as first-order-wrapper to more accurately describe its function.

Since this project is no longer in active development, the name won't be changed everywhere. However, I'm starting to update documentation on all of my projects to make them better, so the name change felt necessary. You can still refer to the project as Yanderify, and all of the links mentioning yanderify will continue to work. Changing the repo name would destroy any bookmarks or links to here, so that is not going to happen. The old documentation lies below.

first-order-wrapper

first-order-wrapper is a wrapper around first-order-model. It exposes a simple user interface designed to be usable by anyone, with any level of technical skill. first-order-model was previously hard for the average person to use, since it required knowledge of the command line and installation of libraries. Yanderify eliminates these issues by providing a complete environment, with all necessary components bundled inside.

Please see the "releases" tab for the latest build. The repo is not necessarily up to date: as of this writing, the latest-v4 branch contains the latest code, while master contains code from two major versions back.

What it does

first-order-model is an Artificial Intelligence that takes a source video and an image of a face, and animates the image to match the movement of the source video.

Here is an example of what first-order-model can do; this image was created by the First Order Motion Model paper authors, and is taken from their repository. Most of the heavy lifting of Yanderify is done by code written by these paper authors, so I suggest you go check out their repository if you are interested.

Example

How it works

Double-clicking yanderify.exe will bring up a window that looks like this: [screenshot of the program]

  • "I don't have NVIDIA >=GTX750": checking this will enable CPU mode, which is a lot slower, but is the only method for users without a compatible graphics card.
  • "Select Video": Clicking this will display a file selection box. This file should be the video you want to animate the new face to; in other words, this video will "drive" the image to move in the same way.
  • "Select Image": This is a cropped picture of the face you want to be animated. In other words, this is the face that the video "wears".
  • "Select Output": This is where your result will be stored.

Just hit "Go," and your video will be re-animated and re-encoded with the source audio!

Addendum

Join our discord server (updated again). A lot of people have asked me to make a twitter. I probably won't be very active on it, but here ya go: @dunnousername2

from https://github.com/dunnousername/yanderifier 
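If you would rather skip the GUI entirely, the underlying first-order-model can be scripted directly from Python. The sketch below follows the usage example in the upstream first-order-model repository; the helper functions (load_checkpoints, make_animation), the vox-256.yaml config and the vox-cpk.pth.tar checkpoint name all come from that repo and may change between versions, so treat it as an illustration rather than Yanderify's own code:

    import imageio
    from skimage import img_as_ubyte
    from skimage.transform import resize

    # demo.py ships with the first-order-model repository; run this from its checkout.
    # Reading .mp4 files with imageio also requires the imageio-ffmpeg plugin.
    from demo import load_checkpoints, make_animation

    # Normalise the inputs to the 256x256 size the vox checkpoint expects.
    source_image = resize(imageio.imread("face.png"), (256, 256))[..., :3]
    driving_video = [resize(f, (256, 256))[..., :3]
                     for f in imageio.mimread("driving.mp4", memtest=False)]

    generator, kp_detector = load_checkpoints(
        config_path="config/vox-256.yaml",
        checkpoint_path="vox-cpk.pth.tar",
    )

    frames = make_animation(source_image, driving_video, generator, kp_detector, relative=True)
    imageio.mimsave("result.mp4", [img_as_ubyte(f) for f in frames])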

-----------------------------------------------------------------

How was the viral "蚂蚁牙黑" (ma-ya-hei) effect that swept social networks made? This app makes it easy

Details

The viral trick of making a photo act out whatever the person in a video does is built on the same technology demonstrated by the Yanderify project.

And there is an app built on this technology that non-technical friends can use too. What is it called? Avatarify.

The app is currently only available for iPhone and iPad, so Apple users are in luck; Android users may well feel left out. Android folks, time to switch phones, heh.

Open the app and pick a photo from your local camera roll; for example, I picked a photo of Einstein.

Then tap Next and choose the sample video named mai-ha-hi. You'll be offered a high-quality and a normal-quality option; I chose high quality.

Once it finishes generating, you can view the resulting video.

From there you can save and share the video. Fun, isn't it?

------------------------------------

Make photos sing like crazy with the WOMBO app. It's just that interesting!

I came across a fun app called WOMBO: upload a single photo and it lip-syncs the mouth seamlessly, as if a real person were singing, gestures and all. It's even more polished than the photo-singing AI technique shared earlier. That earlier article had a fairly high barrier to entry, whereas this one is about as beginner-friendly as it gets: just download the app and you're set. Sharing it here.

Details

The app is called WOMBO and is easy to find on the App Store and the Play Store; just search for the name.

WOMBO combines earlier best-in-class lip-sync techniques with the latest AI face-animation technology, so the people it produces can sing in a remarkably lifelike way. And with catchy, ear-worm songs, even when the video compositing isn't perfect, the music makes it easy to overlook the visual rough edges.

WOMBO's real strength is that it doesn't just make a flat face in a photo sing. As the example shows, the facial expressions are rich and the head keeps moving, which is completely different from the old-style lip-sync effects where only the mouth moves.

The steps are simple: install the app, upload a photo, pick a song, wait a moment, and out comes a video of the photo singing and swaying along. Even more impressive, the photo doesn't have to be a frontal shot; a profile, an upward angle, even a glance back over the shoulder will work, and it can even open eyes that are closed in the photo.

