A Practical Tutorial for FakeApp


If you are interested in reading more about AI art (Stable Diffusion, Midjourney, etc.), you can check this article instead: The Rise of AI Art.

Introduction

As explained in the first lecture of this course, An Introduction to DeepFakes and Face-Swap Technology, creating a deepfake requires three steps: extraction, training and creation.

Step 1. Extraction

To train your model, FakeApp needs a large dataset of images. Unless you already have hundreds of pictures selected, FakeApp comes with a handy feature that allows you to extract all frames from a video. This can be done in the GET DATASET tab. All you need is to specify a link to an mp4 video. Clicking on EXTRACT will start the process.

If your original video is called movie.mp4, the frames will be extracted into a folder called dataset-video. Inside, there will be another folder called extracted, which contains the aligned images ready to be used in the training process. You might also see a file called alignments.json, which indicates, for each aligned frame, its original position in the image from which it was extracted.
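To give a feel for what alignments.json stores, here is a small sketch that parses a file of that kind. The schema below (frame name, crop position, crop size) is an assumption made up for illustration; FakeApp's real format may differ.

```python
import json

# Hypothetical alignments.json content (the real schema may differ):
# each aligned face records the frame it was cropped from and where
# the crop sits inside that frame.
sample = """{
  "aligned_0.png": {"frame": "out0.png", "x": 312, "y": 140, "size": 256},
  "aligned_1.png": {"frame": "out1.png", "x": 305, "y": 138, "size": 256}
}"""

alignments = json.loads(sample)
for face, info in alignments.items():
    print(f"{face} was cropped from {info['frame']} at ({info['x']}, {info['y']})")
```

This mapping is what lets the creation step paste each converted face back into the right spot of the original frame.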

After the extraction process is done, the only thing you need is the extracted folder; you can delete all the other files. Before proceeding to the next step, just make sure that the aligned faces are, indeed, aligned (picture below). The face detection fails fairly often, so expect some manual clean-up.

Ideally, what you need is a video of person A and a video of person B. You’ll then need to run the process twice, to get two folders. If you have multiple videos of the same person, extract all of them and merge the folders. Alternatively, you can join the videos one after the other using Movie Maker, or an equivalent program.
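Merging several extracted folders is just a file copy; a minimal sketch using only the standard library (the folder layout and prefixing scheme are my own convention, not FakeApp's):

```python
import shutil
from pathlib import Path

def merge_face_folders(sources, dest):
    """Copy every PNG from several extracted/ folders into one dataset,
    prefixing file names with the source index so they don't collide."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    count = 0
    for i, src in enumerate(sources):
        for img in sorted(Path(src).glob("*.png")):
            shutil.copy(img, dest / f"{i}_{img.name}")
            count += 1
    return count

# e.g. merge_face_folders(["dataset-video1/extracted",
#                          "dataset-video2/extracted"], "dataset-merged")
```

The prefix matters because FakeApp names frames the same way in every extraction run, so a naive copy would silently overwrite files.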

Step 2. Training

In FakeApp, you can train your model from the TRAIN tab. Under Data A and Data B you need to copy the paths of the extracted folders. As a convention, Data A is the folder extracted from the background video, and Data B contains the faces of the person you want to insert into the Data A video. The training process will convert the face of person A into person B. In reality, the neural network works in both directions; it does not really matter which one you choose as A and which one as B.
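To see why the direction does not matter, here is a deliberately tiny, linear toy version of the shared-encoder / twin-decoder idea behind face-swap training. FakeApp's real network is a deep convolutional autoencoder; every size, value and training detail here is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 8                               # toy "image" and latent sizes
faces_a = rng.normal(size=(200, D))        # stand-in for person A's faces
faces_b = rng.normal(size=(200, D)) + 1.0  # stand-in for person B's faces

W_enc = rng.normal(scale=0.1, size=(D, H))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(H, D))  # decoder for person A
W_dec_b = rng.normal(scale=0.1, size=(H, D))  # decoder for person B

def recon_loss(X, W_dec):
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

loss_before = recon_loss(faces_a, W_dec_a)

lr = 1e-3
for _ in range(500):
    # Each step trains both autoencoders. The encoder is shared, so it
    # learns a representation that works for A and B at the same time.
    for X, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        Z = X @ W_enc
        err = Z @ W_dec - X                             # reconstruction error
        W_dec -= lr * (Z.T @ err) / len(X)              # decoder gradient step
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)  # encoder gradient step

loss_after = recon_loss(faces_a, W_dec_a)

# The "swap": encode person A's faces, decode with person B's decoder.
swapped = faces_a @ W_enc @ W_dec_b
```

Because the shared encoder is trained on both identities, swapping A into B or B into A only changes which decoder you run at conversion time.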

You will also need a folder for the model. If this is your first time training from person A to person B, you can use an empty folder. FakeApp will use it to store the parameters of the trained neural network.

The training settings need to be configured before starting this process. The ones highlighted in red below refer to the training process. Nodes and Layers configure the neural network; Batch Size controls how many faces are processed at each training step. The meaning of these parameters is explained in depth in another post.

You will have to adjust these settings depending on how much memory is available on your GPU; if you do not have enough memory, the training process will fail. The table below shows the highest settings you are likely to run on a GPU with 2 GB of RAM, and the recommended settings for an 8 GB card. The ideal values may still vary based on your model.

Parameter     2 GB    8 GB
Batch Size    16      128
Nodes         64      1024
Layers        3       4
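The table can be encoded as a simple lookup. The two tiers come from the table above; the selection rule (pick the largest tier your GPU memory covers) is my own assumption:

```python
# Settings tiers taken from the table above; the fallback rule below
# (use the largest tier your GPU memory covers) is an assumption.
SETTINGS = {
    2: {"batch_size": 16,  "nodes": 64,   "layers": 3},
    8: {"batch_size": 128, "nodes": 1024, "layers": 4},
}

def settings_for(gpu_gb):
    eligible = [tier for tier in SETTINGS if gpu_gb >= tier]
    if not eligible:
        raise ValueError("GPU memory below the minimum 2 GB tier")
    return SETTINGS[max(eligible)]
```

So a 4 GB card would start from the 2 GB settings and ratchet them up until training stops failing.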

Monitor the progress. While training, you will see a window that shows how well the neural network is performing. The GIF below shows three hours’ worth of training, using game developer Richard Franke and his alter ego Kitty Powers (with videos from the Kitty Powers’ Matchmaker and Kitty Powers’ Love Life trailers) as person B and person A, respectively.

You can press Q at any time to stop the training process. To resume it, simply start it again using the same model folder. FakeApp also shows a score which indicates the error made when reconstructing person A into B and person B into A. Values below 0.02 are usually considered acceptable.
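The 0.02 rule of thumb is easy to automate if you ever script a training loop yourself. A small sketch: the threshold comes from the text above, while the idea of averaging over a window (so one lucky batch doesn't stop training early) is my own assumption:

```python
def should_stop(loss_history, threshold=0.02, window=100):
    """Return True once the mean loss over the last `window` readings
    drops below `threshold` (0.02, the rule of thumb above)."""
    if len(loss_history) < window:
        return False  # not enough readings to judge yet
    recent = loss_history[-window:]
    return sum(recent) / window < threshold
```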


Step 3. Creation

The process of creating a video is very similar to the one in GET DATASET. You need to provide the path to an mp4 video, and the folder of your model: the folder that contains the files encoder.h5, decoder_A.h5 and decoder_B.h5. You will also need to specify the target FPS.

Pressing CREATE will automatically:

  • Extract all the frames from the source video in the workdir-video folder,
  • Crop all faces and align them in the workdir-video/extracted folder,
  • Process each face using the trained model,
  • Merge the faces back into the original frame and store them in the workdir-video/merged folder,
  • Join all the frames to create the final video.
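The merge step above boils down to pasting pixels back at the coordinates recorded during extraction. A naive numpy sketch of that one step (a real merge also blends the seams, which is what the creation options control; the function and its arguments are illustrative, not FakeApp's API):

```python
import numpy as np

def merge_face(frame, face, x, y):
    """Paste a converted face back into a frame at (x, y).
    A naive hard paste; FakeApp additionally blends the edges."""
    out = frame.copy()          # leave the original frame untouched
    h, w = face.shape[:2]
    out[y:y + h, x:x + w] = face
    return out
```

Without the blending, the hard rectangular edge of the paste is clearly visible, which is why the seamless/blur options exist.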

In the settings (below), there is an option to decide if you want person A to be converted to person B (A to B) or person B to person A (B to A).

The other options are used to merge the reconstructed face back into the frame. They will be discussed in detail in a later post.


Conclusion


A special thanks goes to Christos Sfetsios and David King, who gave me access to the machine I have used to create the deepfakes used in this tutorial.

Comments

61 responses to “A Practical Tutorial for FakeApp”

  1. How do you use the “Image” option?

  2. No new files were created after extracting images from the video… the process stopped and I got an error message.

  3. […] Part 4. A Practical Tutorial for FakeApp […]

  4. Lukas Cizek

    Good day. I need advice. When installing FakeApp, a white box appears stating: Announcements fakeapp 2.2.0! and so on. The box does not want to disappear. What can I do to make it disappear so that I can start trying to make videos? Thank you for your answer.

  5. Fernando

    I have the problem

    undefinedUsing GPU0 for processing
    Traceback (most recent call last):
    File “execute.py”, line 69, in
    File “train.py”, line 42, in main
    MemoryError
    [24628] Failed to execute script execute

    What can i do?

    1. That’s exactly the same problem that I have… did you solve it already?

  6. MAYUR

    Hello sir,
    only the frame splitting succeeds; the frames are not aligned.
    Please tell me why.

  7. Hi,
    I got some problems in the ‘merge’ phase.
    Could you tell me what is wrong?
    Here are the comments in the log file.
    Thank you!

    undefinedffmpeg version git-2017-12-29-0c78b6a Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.2.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
    libavutil 56. 7.100 / 56. 7.100
    libavcodec 58. 9.100 / 58. 9.100
    libavformat 58. 3.100 / 58. 3.100
    libavdevice 58. 0.100 / 58. 0.100
    libavfilter 7. 8.100 / 7. 8.100
    libswscale 5. 0.101 / 5. 0.101
    libswresample 3. 0.101 / 3. 0.101
    libpostproc 55. 0.100 / 55. 0.100
    [image2 demuxer @ 00000295fd1ba840] Unable to parse option value “” as video rate
    [image2 demuxer @ 00000295fd1ba840] Error setting option framerate to value .
    D:\workdir-11\merged\out%d.png: Invalid argument

  8. Hi, here is the result:
    Reading config file from C://fakes//encoder.h5\config.p
    Model config file is missing or corrupted. Briefly rerun the Train tool with model to produce a new config file.
    Traceback (most recent call last):
    File “merge_faces.py”, line 59, in
    File “d:\anaconda\envs\merging\lib\site-packages\PyInstaller\loader\pyimod03_importers.py”, line 631, in exec_module
    File “model.py”, line 1, in
    ImportError: cannot import name ‘layers’
    [10788] Failed to execute script merge_faces
    Do you know what I did wrong ?

  9. Liviu Rusu

    I have installed FakeApp and CUDA 9.0 + patches 1-4 as you described in the tutorial above, downloaded cuDNN 7.5 from NVIDIA and extracted it into the CUDA 9 folder. When I run the program, it extracts the photos from the video into the dataset folder but does not create the extracted folder with aligned faces, and the log file shows this error ( undefined File “ctypes\__init__.py”, line 348, in __init__). Please help

  10. Kunal Goyal

    Please tell me how to remove that message which shows on the screen after installing the software.

    1. When you download the software, there is a zip archive named “core”. Unzip it and put all its contents in this place: “C:\Users\[username]\AppData\Local\FakeApp\app-2.2.0\resources\api”

      1. Lukas Cizek

        Which username do you mean? Mine? If I enter it there, nothing happens. I don’t know how to solve this problem. Thank you for your answer.

    2. Lukas Cizek

      I would also be interested. I have this problem too.

  11. White Tiger

    Will an AMD Radeon 530 work, or only NVIDIA?

  12. Nikhil

    I have an interesting question. Let’s say there’s a video of two people sitting side by side. Can this technology completely replace one person in that video with another, say myself, using deepfakes?

    1. I have a similar question too. I’m thinking that FakeApp will change both faces. Just the faces. But I don’t know how this software works yet, because it didn’t work here.

  13. pandabaer123

    Hello, I have the same problem: “undefined File “align_faces.py”, line 136, in main”. Can you help me too?
    Thanks

    1. I have the same problem: “undefined File “align_faces.py”, line 136, in main”.
      Can anyone help me?
      I just reduced the video to 1280×720

  14. Liviu Rusu

    Hi, I’m getting this error in the log after the app splits my video into 163 png images in the dataset-video1 folder, but in the extracted subfolder there are no images of the face: “undefined File “align_faces.py”, line 136, in main
    File “align_faces.py”, line 136, in main”. Can you help me please?

  15. An error has occurred in the creation process. Check the end of the log.txt file for details, and feel free to post on fakeapp.org/forum for help.

    Need Help here

  16. Hi, I’m using FakeApp 2.2 but the program gets an error and doesn’t do the align process:

    undefined File “ctypes\__init__.py”, line 348, in __init__
    [10720] Failed to execute script execute

    1. hamunaptra2010

      Hi Saud!

      Could you solve that error?

  17. Alfonso Martin

    Hi, I have this problem after extracting the images:
    undefined File “align_faces.py”, line 136, in main
    File “align_faces.py”, line 103, in iter_face_alignments.
    Can anyone help me with this? Thanks.

    1. Same problem, it doesn’t run the process. Did you manage to get it working??? It looks to me like a library error. No idea.

  18. Hi, it tells me to install the core library. I wanted to understand why. Could you help me?

  19. Hey Alan, Thanks for the tutorials. I am running fakeapp now. I extracted about 6000+ frames from a video and now it is initializing. It has been doing that for about 10 mins. Is this a normal part of the process? Thanks Again

  20. Hi, I stopped the training process when it was at 0.03. I resumed it as you said, with the same folders again: model, data A, data B. The problem is that it is not resuming but starting from the beginning… Did I do something wrong?

  21. AHMET VELİ OLGUNDENİZ

    My extracted folder is always empty. What am I doing wrong?

  22. SEBASTIAN

    My PC is an Intel G2020 at 2.9 GHz with 8 GB of RAM, and my GPU is a GT 1030. Is this enough to get good results, and how much time would it take to make a 2-minute video?

  23. Hi, can I replace the face with photos taken from the internet instead of a video? And how can I ignore other faces in the video? Help would be appreciated, thanks.

    1. I’d suggest using faceswap instead.
      It is more advanced, and has an option to decide which faces to change.

    2. Thanks for your great tutorial. How can we save the training and quit? And how can we pause and resume it?

  24. Hello, I have encountered a strange problem.
    I’ll just copy the last line of the error message in the log i got:
    “FileNotFoundError: [Errno 2] No such file or directory: ‘C:\\Users\\User\\Downloads\\workdir-tyrion_short\\alignments.json’
    [9700] Failed to execute script execute”
    What confuses me is the amount of “\” there; also, the alignment file it’s looking for seems to be in a different directory. I’m sure I’m doing something wrong. Have you encountered this before?

  25. erwan75

    Can we do this with only a picture? There is a “picture” option written in GET DATASET. How does it work? Thank you

  26. Hi, great tutorial!
    A question: in your opinion, what could the error
    Loss A:
    Loss B:
    (with no values) be?
    I note that I have correctly processed faces A and B into the respective extracted folders.
    Thank you very much

  27. Sam Zhu

    undefined File “align_faces.py”, line 136, in main

    1. Quách Chí Giang

      Downscale to 720p.

  28. Akxoloto

    Hi Alan,

    First, thanks for this tutorial!
    What’s your feeling about OpenFaceSwap? Did you try it?

    Regards.

  29. Hi, I wonder if you can help with a problem I’m having. I haven’t been able to get FakeApp 2.2 (or any other deepfake tool) to work yet; when trying to train, FakeApp 2.2 self-terminates. Log.txt says this:
    “undefinedUsing GPU0 for processing
    Memory Limit: default Memory Growth: true
    ctions that this TensorFlow binary was not compiled to use: AVX AVX2
    2019-05-22 18:06:36.472569: I C:\tf_jenkins\workspace\re60 major: 5 minor: 2 memoryClockRate(GHz): 1.304
    pciBusID: 0000:01:00.0
    totalMemory: 4.00GiB freeMemory: 3.33GiB
    2019-05-22 18:06:36.473356: I C:\tf_jenkins\Flow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2)
    Focus the training preview window and press ‘q’ to stop training and save the modelperformance gains if more memory is available.
    2019-05-22 18:07:59.455476: W C:\tf_jebfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.20GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 1d be performance gains if more memory is available.
    2019-05-22 18:07:59.896093: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\3tensorflow\cor6\e\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.56GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.392187: W C:\tf_jenkins\worke\bfc_allocator.cc:217] Allocator (GPU_0_ybfc) ran out of memor trying to allocate 1.09GiB. The caller indicates that this is not a failure, but may mean that there could beerformance gains if more memory is available.
    p2019-05-22 18:08:00.473250: W C: Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.576086: W C:\t that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.651227: W C:\tf_jee\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.33GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.849691: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.954797: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.28GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    Loss A: 0.20434043Loss B: 0.19918257Traceback (most recent call last):
    File “execute.py”, line 69, in
    File “train.py”, line 65, in main
    FileNotFoundError: Model directory not found.
    [15008] Failed to execute script execute”

    What am I missing?

    1. Kevin

      Your GPU is only 2 GB. You need to lower the default settings. Set batch size to 16, nodes to 64 and layers to 3. Good luck!

  30. Sven59

    I can’t seem to get past the first bit with the sign that says Announcements, reload the app, which I think I have done, but I’m not sure.

  31. Hey Alan! Great demo. How long did that video take?

  32. Hi, I can’t extract images from the video. Always the same error: “An error has occurred in the creation process. Check the end of the log.txt file for details”.

    1. Hi Ale,
      I am not the creator of Fakeapp and unfortunately I cannot help you with this!

  33. Why don’t I get an “extracted” folder after extracting?
    I only get a hundred photos.

    And on the second try… it always errors.

    Can you help me?

    I’m using Windows 8

  34. Question

    Hi,
    Can you tell me which NVIDIA card you are using? (the specific model)
    I tried with an Asus Dual GTX 1060, and it did not allow me to install the CUDA developer drivers.

    Thanks.

    1. Hi!

      I have used several NVIDIA graphics cards.
      You can check this list to see if yours is supported:
      https://developer.nvidia.com/cuda-gpus
      although I believe it should be.

      If you were unable to install the CUDA drivers, you might have to check which error message you got exactly and investigate that, as there are so many possible causes!

  35. undefined File “d:\anaconda\envs\fakeapp\lib\site-packages\PyInstaller\loader\pyimod03_importers.py”, line 631, in exec_module
    File “align_faces.py”, line 7, in
    File “C:\Users\jb nov\AppData\Local\FakeApp\app-2.2.0\resources\api\torch\__init__.py”, line 76, in
    from torch._C import *
    ImportError: DLL load failed: The specified module could not be found.
    [7320] Failed to execute script execute
    The above is the log file; your opinion please?

    1. https://archive.org/details/FakeApp

      Please see the full article for troubleshooting

  36. Hi,
    during “train” I got a message: “training process ended. If you did not end it yourself, an error occurred. Check the end of the log.txt file for details, and feel free to post it on fakeapp.org/forum for help”

    1. Hi jabbar,

      Thank you for your message but this is not the fakeapp forum.
      I have not made the software, so I won’t be able to help you with this, unfortunately.

  37. Hello. I’m trying to create deepfakes from pictures where there is more than one face. In this scenario, things get more complicated. Do you have any suggestions? Do you plan to write a tutorial about this? Thanks.

    1. Hi Mario!
      If you are using faceswap, you can provide an image of the face you want to use. This will force the algorithm to ignore all the others, although it is not always super accurate.

      If you are not using faceswap or that feature is not available, I would suggest editing the original video to cover all the other faces with a black box. Process the video like that, and then just merge it with the original one so that you can crop only the parts of the videos where the swapped face is present.

      Also, I hope you have full consent from all the people involved in the video!
      Don’t forget that!

  38. Hey Alan! Thank you so so much for these tutorials. Really really comprehensive and helpful. One note, when I tried pausing and restarting training on 2.2, it looks like it totally reset my model. Losses went back up after 10+ hrs processing, and when I went to create the video, the faces were blank. Did I do something wrong? Thanks a lot!

  39. Richard Button

    Hi, I am having a PC built dedicated to running FakeApp 2.1 or 2.2. Can you advise me what additional software is required, in what order it should be installed, and how to set it all up?

  40. Thanks for great tutorial!

    I used FakeApp 1.1 to train a model from scratch to under 0.009 loss, and the previews look great, but the merged (extracted) results look not so great and very different, screenshots below:

    Loss: https://i.imgur.com/QwJO4tG.png
    Previews: https://i.imgur.com/xz65Fus.png
    Converted (zoomed in): https://i.imgur.com/vulbdt0.png

    The above converted result (merged folder) was the best I could get, using the following settings: seamless false, blur size 0, and kernel size 0.

    A previous test with another model, which was also trained from scratch to under 0.01 loss, gave an even worse result, where the previews while training looked amazing once again, but once converted did not even look like a face but rather like a distorted nightmare stain.

    Any idea on what I’m doing wrong?

    1. Hi Mathieu,

      I suspect you are creating deepfakes using porn actresses. I discussed why that is a rather bad idea in an earlier post in the series.
      So yes, that is definitely one thing that is wrong. 🙂

  41. Wow! It’s a very useful tutorial for FakeApp users. I was so excited to install FakeApp, but I am sad because my laptop has an Intel graphics card. Anyway, thank you for the article.
    Dal Saru

    1. Hi!

      As far as I know, you do need an NVIDIA graphics card able to run CUDA to get reasonable performance.

Leave a Reply

Your email address will not be published. Required fields are marked *