Monday 28 March 2022

GFPGAN

GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.


  1. Colab Demo for GFPGAN (another Colab demo is available for the original paper model)
  2. Online demo: Huggingface (returns only the cropped face)
  3. Online demo: Replicate.ai (may need to sign in; returns the whole image)
  4. Online demo: Baseten.co (backed by GPU; returns the whole image)
  5. We provide a clean version of GFPGAN, which can run without CUDA extensions, so it also works on Windows and in CPU mode.

🚀 Thanks for your interest in our work. You may also want to check our new updates on the tiny models for anime images and videos in Real-ESRGAN 😊

GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration.
It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration.

 Frequently Asked Questions can be found in FAQ.md.

🚩 Updates

  • 🔥🔥 Add the V1.3 model, which produces more natural restoration results and better results on very low-quality / high-quality inputs. See more in the Model zoo and Comparisons.md.
  •  Integrated into Huggingface Spaces with Gradio. See the Gradio Web Demo.
  •  Support enhancing non-face regions (background) with Real-ESRGAN.
  •  We provide a clean version of GFPGAN, which does not require CUDA extensions.
  •  We provide an updated model without colorizing faces.

If GFPGAN is helpful for your photos/projects, please help by giving this repo a ⭐ or recommending it to your friends. Thanks 😊 Other recommended projects:
▶️ Real-ESRGAN: A practical algorithm for general image restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection that provides useful face-related functions
▶️ HandyView: A PyQt5-based image viewer that is handy for viewing and comparison


📖 GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

[Paper]   [Project Page]   [Demo]
Xintao Wang, Yu Li, Honglun Zhang, Ying Shan
Applied Research Center (ARC), Tencent PCG


🔧 Dependencies and Installation

Installation

We now provide a clean version of GFPGAN, which does not require customized CUDA extensions.
If you want to use the original model in our paper, please see PaperModel.md for installation.

  1. Clone repo

    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    
    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib
    
    pip install -r requirements.txt
    python setup.py develop
    
    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
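
Note: the PyPI and Publish-pip badges in the repository indicate that GFPGAN is also published as a package. If you only need inference (not training), installing the packaged release may be enough; a minimal sketch based on that assumption:

    # Inference-only install from PyPI instead of from source
    pip install gfpgan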

⚡ Quick Inference

We take the v1.3 version as an example. More models can be found here.

Download pre-trained models: GFPGANv1.3.pth

wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models

Inference!

python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Option: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler, 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto

If you want to use the original model in our paper, please see PaperModel.md for installation and inference.
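
If you prefer calling GFPGAN from Python rather than through the CLI, the inference script is built around a GFPGANer helper class. The snippet below is a minimal sketch of that usage for the clean v1.3 model; the constructor arguments (arch, channel_multiplier) and the sample file name are assumptions and may need adjusting for other versions.

    import cv2
    from gfpgan import GFPGANer

    # Load the v1.3 model (the 'clean' architecture needs no CUDA extensions)
    restorer = GFPGANer(
        model_path='experiments/pretrained_models/GFPGANv1.3.pth',
        upscale=2,
        arch='clean',
        channel_multiplier=2,
        bg_upsampler=None)  # or a Real-ESRGAN upsampler for the background

    # Read a BGR image (any input photo; the path here is illustrative)
    img = cv2.imread('inputs/whole_imgs/00.jpg', cv2.IMREAD_COLOR)

    # Returns the cropped faces, the restored faces, and the restored whole image
    cropped_faces, restored_faces, restored_img = restorer.enhance(
        img, has_aligned=False, only_center_face=False, paste_back=True)
    cv2.imwrite('restored.png', restored_img)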

🏰 Model Zoo

Version | Model Name                | Description
V1.3    | GFPGANv1.3.pth            | Based on V1.2; more natural restoration results; better results on very low-quality / high-quality inputs.
V1.2    | GFPGANCleanv1-NoCE-C2.pth | No colorization; no CUDA extensions are required. Trained with more data and with pre-processing.
V1      | GFPGANv1.pth              | The paper model, with colorization.

The comparisons are in Comparisons.md.

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.

Version | Strengths | Weaknesses
V1.3    | ✓ natural outputs; ✓ better results on very low-quality inputs; ✓ works on relatively high-quality inputs; ✓ can have repeated (twice) restorations | ✗ not very sharp; ✗ slight change on identity
V1.2    | ✓ sharper output; ✓ with beauty makeup | ✗ some outputs are unnatural
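
The "repeated (twice) restorations" point for V1.3 simply means its output can be fed through the tool again. A sketch, assuming the default output layout places the restored whole images in a restored_imgs sub-folder:

    # First pass
    python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
    # Second pass on the restored whole images
    python inference_gfpgan.py -i results/restored_imgs -o results_round2 -v 1.3 -s 1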

You can find more models (such as the discriminators) here: [Google Drive], OR [Tencent Cloud 腾讯微云]

💻 Training

We provide the training code for GFPGAN (used in our paper).
You can adapt it to your own needs.

Tips

  1. More high-quality faces can improve the restoration quality.
  2. You may need to perform some pre-processing, such as beauty makeup.

Procedures

(You can try a simple version (options/train_gfpgan_v1_simple.yml) that does not require face component landmarks.)

  1. Dataset preparation: FFHQ

  2. Download pre-trained models and other data. Put them in the experiments/pretrained_models folder.

    1. Pre-trained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth
    2. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth
    3. A simple ArcFace model: arcface_resnet18.pth
  3. Modify the configuration file options/train_gfpgan_v1.yml accordingly.

  4. Training

python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch
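
The command above assumes 4 GPUs with the PyTorch distributed launcher. For a quick single-GPU or debugging run, the training script can usually be launched directly (a sketch, assuming the BasicSR-style entry point falls back to non-distributed training when no launcher is specified):

    python gfpgan/train.py -opt options/train_gfpgan_v1.yml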

from https://github.com/TencentARC/GFPGAN

(GFPGAN

Demo: https://replicate.com/tencentarc/gfpgan
A project that restores old photos to high definition, aiming to develop practical algorithms for real-world face restoration. It can be used to restore old photos or to improve AI-generated faces. The results are decent; it mainly repairs faces, and restoration of other regions is less satisfactory.
https://github.com/TencentARC/GFPGAN)

(GFPGAN - Tencent's open-source AI tool for restoring old photos

GFPGAN is Tencent's open-source face restoration algorithm. It leverages the rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration, aiming to develop a practical algorithm for real-world face restoration.

I tried the online demo on some old photos, and the results are stunning. If you are interested, give it a try; signing in with GitHub is all you need.

Demo: https://replicate.com/tencentarc/gfpgan

GitHub: https://github.com/TencentARC/GFPGAN)

--------------------------------------------------------

 A lossless image-upscaling tool: mosaic-quality pictures become crisp high-resolution images in seconds!

A developer has built an image-upscaling tool on top of Tencent ARC Lab's latest image super-resolution model.

Compared with other upscaling tools, this one is extremely simple: a simple interface and simple operation.

You just import the image you want to enlarge, click the generate button, and wait a moment to get the upscaled high-resolution result (the output is saved to the same path as the original). I tested it myself, and the improvement is very noticeable.

Doesn't the whole world look sharper now? Tencent ARC Lab's super-resolution model, Real-ESRGAN, is an improved follow-up to ESRGAN. It focuses on removing ringing and other artifacts from low-resolution images and is particularly good at recovering detail in real-world photos.

If you are interested, download it and try it on your own images!

Project: https://github.com/xinntao/Real-ESRGAN

-------------------------------------------

An AI lossless image-upscaling tool based on Tencent ARC: AI LOSSLESS ZOOMER

You may come across images online that you would like to use as a phone/desktop wallpaper or as design material, but their resolution is too low and they look blurry once set as a wallpaper. That is when you go looking for a lossless image-upscaling tool.

Popular lossless upscalers include waifu2x, PhotoZoom, and Topaz A.I. Gigapixel. They enlarge images either with AI models that learn to upscale pixels or with the software's own interpolation algorithms.

Shared here is an AI lossless image-upscaling tool that a community developer built on top of Tencent ARC Lab's Real-ESRGAN model. According to the author, the model was trained mainly on portraits, so it upscales portrait photos well, and anime images in particular.

About the tool

The tool ships with the latest AI engine; just unzip it and run. It supports Windows 7 or later and requires the .NET Framework 4.6.

In the settings you can see the engine core and modules, and you can set a custom output directory if you need one.

Then open the images you want to upscale (PNG, JPG, and BMP are supported, and batches of images are upscaled automatically), click Start, and wait a moment.

I picked a random image at 483 x 586 pixels; the upscaled output is 1720 x 2344. Comparing the enlarged details, the result is quite good.

Features

    Multi-threaded processing
    Batch image processing
    Configurable settings
    Custom output format and output path
    Selectable AI engine
    Batch task cleanup

Summary

For users who need lossless image upscaling, this tool is a good choice: simple and convenient to use. The only shortcoming is that you cannot manually set the target size of the upscaled image. Fortunately the author has open-sourced the code, so hands-on users can extend it or study it themselves.

    Download:
    https://xia1ge.lanzoui.com/iGfVuutpeve
    Project:
    https://github.com/X-Lucifer/AI-Lossless-Zoomer
    Real-ESRGAN:
    https://github.com/xinntao/Real-ESRGAN

--------------------------------------------------------

Real-ESRGAN - a practical algorithm for general image restoration. It can upscale low-resolution images by 4x while repairing them, turning rough images into remarkably clean ones.

Online demo | Chinese documentation | Android app

Don't be put off by the word "algorithm": this is an out-of-the-box image/video restoration program, and you can find download links for the Windows / Linux / macOS builds directly on the documentation page.

The project provides 5 models, so you can restore photos, anime illustrations, anime videos, and more.
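
As an illustration, the repository's Python inference script can be run roughly as follows (a sketch; the model names come from the Real-ESRGAN model zoo, and the exact flags may differ between versions):

    # General photos
    python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs -o results
    # Anime illustrations
    python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs -o results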

 

 
