Saturday, 26 October 2024

Deepfake Offensive Toolkit



dot (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. dot is built for penetration testing against systems such as identity verification and video conferencing, and is intended for security analysts, Red Team members, and biometrics researchers.

If you want to learn more about how dot is used for penetration tests with deepfakes in the industry, read these articles by The Verge and Biometric Update.

dot is developed for research and demonstration purposes. As an end user, you have the responsibility to obey all applicable laws when using this program. Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

How it works

In a nutshell, dot takes a source face image and a live camera feed, applies the chosen deepfake method to the feed in real time, and outputs the result so it can be injected into a virtual camera.



None of the deepfakes supported by dot requires additional training. They can be applied in real time, on the fly, to a photo of the person being impersonated. Supported methods:

  • face swap (via SimSwap), at resolutions 224 and 512
    • with the option of face superresolution (via GPen) at resolutions 256 and 512
  • lower quality face swap (via OpenCV)
  • FOMM, First Order Motion Model for image animation

Running dot

Graphical interface

GUI Installation

Download and run the dot executable for your OS:

  • Windows (Tested on Windows 10 and 11):

    • Download dot.zip from here, unzip it and then run dot.exe
  • Ubuntu:

    • ToDo
  • Mac (Tested on Apple M2 Sonoma 14.0):

    • Download dot-m2.zip from here and unzip it
    • Open a terminal and run xattr -cr dot-executable.app to remove any extended attributes
    • In case of camera reading error:
      • Right click and choose Show Package Contents
      • Execute dot-executable from Contents/MacOS folder

GUI Usage

Usage example:

  1. Specify the source image in the field source.
  2. Specify the camera id number in the field target. In most cases, 0 is the correct camera id.
  3. Specify the config file in the field config_file. Select a default configuration from the dropdown list or use a custom file.
  4. (Optional) Check the field use_gpu to use the GPU.
  5. Click on the RUN button to start the deepfake.

For more information about each field, click on the menu Help/Usage.

Watch the following demo video for a better understanding of the interface.

Command Line

CLI Installation

Install Pre-requisites
  • Linux

    sudo apt install ffmpeg cmake
  • MacOS

    brew install ffmpeg cmake
  • Windows

    1. Download and install Visual Studio Community from here
    2. Install Desktop development with C++ from the Visual Studio Installer

Create Conda Environment

These instructions assume that you have Miniconda installed on your machine. If you don't, refer to this link for installation instructions.

With GPU Support

conda env create -f envs/environment-gpu.yaml
conda activate dot

    Install the torch and torchvision dependencies based on the CUDA version installed on your machine:

    • Install CUDA 11.8 from link

    • Install cudatoolkit from conda: conda install cudatoolkit=<cuda_version_no> (replace <cuda_version_no> with the version on your machine)

• Install torch and torchvision dependencies: pip install torch==2.0.1+<cuda_tag> torchvision==0.15.2+<cuda_tag> torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118, where <cuda_tag> is the CUDA tag defined by PyTorch. For example, pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 for CUDA 11.8.

  Note: torch 1.9.0+cu111 can also be used.

    To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.cuda.is_available())". If the output is True, the dependencies are installed with CUDA support.
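
For a fuller diagnostic, the short standalone script below (plain PyTorch API, not part of dot; the file name is just a suggestion) prints the installed versions and the detected GPU:

# cuda_check.py - sanity-check the GPU environment (illustrative, not part of dot)
import torch
import torchvision

print(f"torch {torch.__version__}, torchvision {torchvision.__version__}")
print(f"CUDA build: {torch.version.cuda}")
if torch.cuda.is_available():
    # index 0 is the first visible CUDA device
    print(f"GPU detected: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device available; dot would fall back to CPU (slow)")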

With MPS Support (Apple Silicon)

conda env create -f envs/environment-apple-m2.yaml
conda activate dot

    To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.backends.mps.is_available())". If the output is True, the dependencies are installed with Metal programming framework support.
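
As with the CUDA check above, a slightly more verbose verification (plain PyTorch API, not part of dot) can distinguish a missing MPS build from a runtime problem:

# mps_check.py - verify Metal (MPS) support on Apple Silicon (illustrative, not part of dot)
import torch

print(f"torch {torch.__version__}")
print(f"MPS compiled into this build: {torch.backends.mps.is_built()}")
print(f"MPS available at runtime: {torch.backends.mps.is_available()}")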

With CPU Support (slow, not recommended)

conda env create -f envs/environment-cpu.yaml
conda activate dot

Install dot

pip install -e .

Download Models

• Download dot model checkpoints from here
• Unzip the downloaded file in the root of this project

CLI Usage

Run dot --help to get a full list of available options.

SimSwap

dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu

SimSwapHQ

dot -c ./configs/simswaphq.yaml --target 0 --source "./data" --use_gpu

FOMM

dot -c ./configs/fomm.yaml --target 0 --source "./data" --use_gpu

FaceSwap CV2

dot -c ./configs/faceswap_cv2.yaml --target 0 --source "./data" --use_gpu

    Note: To enable face superresolution, use the flag --gpen_type gpen_256 or --gpen_type gpen_512. To use dot on CPU (not recommended), do not pass the --use_gpu flag.
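
For example, a SimSwap run with 256-pixel face superresolution enabled combines the flags shown above:

dot -c ./configs/simswap.yaml --target 0 --source "./data" --gpen_type gpen_256 --use_gpu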

    Controlling dot with CLI

Disclaimer: we use the SimSwap technique for the following demonstration.

Running dot via any of the above methods generates a real-time deepfake on the input video feed, using source images from the data/ folder.

While dot is running, a list of available control options is printed to the terminal window. You can toggle through and select different source images by pressing the associated control key (see the sketch below).
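
This kind of key-driven control is typically a plain OpenCV keyboard loop. The following sketch is illustrative only: the key bindings, the file glob, and the swap step are assumptions, not dot's actual code.

# sketch of a key-driven source-image toggle (illustrative; dot's real bindings may differ)
import glob

import cv2

sources = sorted(glob.glob("data/*.jpg"))  # candidate source faces, as in --source "./data"
idx = 0
cap = cv2.VideoCapture(0)                  # target camera id, as in --target 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # a real pipeline would apply the face swap here using sources[idx]
    cv2.imshow("preview", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("n"):                    # hypothetical binding: next source image
        idx = (idx + 1) % len(sources)
    elif key == ord("q"):                  # hypothetical binding: quit
        break

cap.release()
cv2.destroyAllWindows()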

Watch the following demo video for a better understanding of the control options:

    Docker

    Setting up docker

  • Build the container

    docker-compose up --build -d
    
  • Access the container

    docker-compose exec dot "/bin/bash"
    

    Connect docker to the webcam

    Ubuntu

  • Build the container

    docker build -t dot -f Dockerfile .
    
  • Run the container

    xhost +
    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY \
    -v .:/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
    

    Windows

  • Follow the instructions here under Windows to set up the webcam with docker.

  • Build the container

    docker build -t dot -f Dockerfile .
    
  • Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=192.168.99.1:0 \
    -v .:/dot \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    dot
    

    macOS

  • Follow the instructions here to set up the webcam with docker.

  • Build the container

    docker build -t dot -f Dockerfile .
    
  • Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=$IP:0 \
    -v .:/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
    

    Virtual Camera Injection

    Instructions vary depending on your operating system.

    Windows

• Install OBS Studio.

• Run OBS Studio.

• In the Sources section, press the Add button ("+" sign), select Windows Capture and press OK. In the window that appears, choose "[python.exe]: fomm" in the Window drop-down menu and press OK. Then select Edit -> Transform -> Fit to screen.

• In OBS Studio, go to Tools -> VirtualCam. Check AutoStart, set Buffered Frames to 0 and press Start.

• The OBS-Camera should now be available in Zoom (or other videoconferencing software).

    Ubuntu

    sudo apt update
    sudo apt install v4l-utils v4l2loopback-dkms v4l2loopback-utils
    sudo modprobe v4l2loopback devices=1 card_label="OBS Cam" exclusive_caps=1
    v4l2-ctl --list-devices
    sudo add-apt-repository ppa:obsproject/obs-studio
    sudo apt install obs-studio

Open OBS Studio and check if tools --> v4l2sink exists. If it doesn't, follow these instructions:

    mkdir -p ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
    ln -s /usr/lib/obs-plugins/v4l2sink.so ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/

    Use the virtual camera with OBS Studio:

    • Open OBS Studio
    • Go to tools --> v4l2sink
    • Select /dev/video2 and YUV420
    • Click on start
    • Join a meeting and select OBS Cam
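
Before involving OBS, you can verify that the loopback device accepts frames by pushing a synthetic test pattern into it with ffmpeg (standard ffmpeg options; replace /dev/video2 with your loopback device):

ffmpeg -f lavfi -i testsrc=size=640x480:rate=30 -pix_fmt yuv420p -f v4l2 /dev/video2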

    MacOS

    • Download and install OBS Studio for MacOS from here
    • Open OBS and follow the first-time setup (you might be required to enable certain permissions in System Preferences)
    • Run dot with --use_cam flag to enable camera feed
• Click the "+" button in the Sources section → select "Window Capture", create a new source and press OK → select the window with "python" in its name and press OK
• Click the "Start Virtual Camera" button in the Controls section
• Select "OBS Cam" as the default camera in the video settings of the application targeted by the injection

    Run dot with an Android emulator

    If you are performing a test against a mobile app, virtual cameras are much harder to inject. An alternative is to use mobile emulators and still resort to virtual camera injection.

    • Run dot. Check running dot for more information.

    • Run OBS Studio and set up the virtual camera. Check virtual-camera-injection for more information.

    • Download and Install Genymotion.

    • Open Genymotion and set up the Android emulator.

    • Set up dot with the Android emulator:

      • Open the Android emulator.
  • Click on camera and select OBS-Camera as the front and back cameras. A preview of the dot window should appear. If there is no preview, restart OBS and the emulator and try again. If that still doesn't work, use different virtual camera software such as e2eSoft VCam or ManyCam.
  • dot's deepfake output should now be the emulator's camera feed.

    Speed

    With GPU

Tested on an AMD Ryzen 5 2600 Six-Core Processor with one NVIDIA GeForce RTX 2070

    Simswap: FPS 13.0
    Simswap + gpen 256: FPS 7.0
    SimswapHQ: FPS 11.0
    FOMM: FPS 31.0
    

    With Apple Silicon

Tested on a MacBook Air M2 2022 with 16 GB of RAM

    Simswap: FPS 3.2
    Simswap + gpen 256: FPS 1.8
    SimswapHQ: FPS 2.7
    FOMM: FPS 2.0
    

    License

This is not a commercial Sensity product, and it is distributed freely with no warranties.

The software is distributed under the BSD 3-Clause license. dot utilizes several open source libraries; if you use dot, make sure you agree with their licenses too. In particular, this codebase is built on top of research projects including SimSwap, GPen, and FOMM (First Order Motion Model).

    FAQ

    • dot is very slow and I can't run it in real time

Make sure that you are running it on a GPU by passing the --use_gpu flag; CPU is not recommended. If it is still too slow, you may be running it on an older GPU model with less than 8GB of memory.

    • Does dot only work with a webcam feed or also with a pre-recorded video?

You can use dot on a pre-recorded video file with these scripts, or try it directly on Colab.

    from https://github.com/sensity-ai/dot

     

     
