
A Practical Tutorial for FakeApp

You can read all the posts in this series here:


As explained in the first lecture of this course, An Introduction to DeepFakes and Face-Swap Technology, creating a deepfake requires three steps: extraction, training and creation.

Step 1. Extraction

To train your model, FakeApp needs a large dataset of images. Unless you already have hundreds of pictures selected, FakeApp comes with a handy feature that allows you to extract all the frames from a video. This can be done in the GET DATASET tab. All you need to do is specify the path to an mp4 video. Clicking on EXTRACT will start the process.
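Under the hood, this kind of frame extraction is typically a single ffmpeg invocation. Below is a minimal sketch of the equivalent command; the video name movie.mp4 and the dataset-movie output folder are illustrative assumptions, not something FakeApp exposes:

```python
import os

def frame_extraction_cmd(video_path):
    """Build an ffmpeg command that dumps every frame of the video
    as numbered PNG files into a dataset-<name> folder."""
    name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = "dataset-" + name
    # out%d.png produces out1.png, out2.png, ... one file per frame.
    return ["ffmpeg", "-i", video_path, os.path.join(out_dir, "out%d.png")]

print(" ".join(frame_extraction_cmd("movie.mp4")))
```

Running the command yourself (e.g. with subprocess.run) requires ffmpeg on the PATH and the output folder to exist beforehand.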

If your original video is called movie.mp4, the frames will be extracted into a folder called dataset-movie. Inside, there will be another folder called extracted, which contains the aligned images ready to be used in the training process. You might also see a file called alignments.json, which records, for each aligned face, its original position in the frame from which it was extracted.
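The alignments file is plain JSON and can be inspected with a few lines of Python. The schema below is a rough assumption for illustration (the real keys vary between versions); the point is simply that each aligned face maps back to a source frame and a face region:

```python
import json
import os
import tempfile

# Hypothetical alignments data: aligned face -> source frame and face box.
alignments = {
    "out1_0.png": {"frame": "out1.png", "box": [112, 80, 256, 256]},
    "out2_0.png": {"frame": "out2.png", "box": [118, 84, 256, 256]},
}

# Write and re-read it, as a merge step would when placing faces back.
path = os.path.join(tempfile.mkdtemp(), "alignments.json")
with open(path, "w") as f:
    json.dump(alignments, f)

with open(path) as f:
    loaded = json.load(f)

print(loaded["out1_0.png"]["frame"])  # out1.png
```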

After the extraction process is done, the only thing you need is the extracted folder; you can delete all the other files. Before proceeding to the next step, just make sure that the aligned faces are, indeed, aligned (picture below). Face detection fails fairly often, so expect some manual cleanup.

Ideally, what you need is a video of person A and a video of person B. You will then need to run the process twice, to get two folders. If you have multiple videos of the same person, extract all of them and merge the folders. Alternatively, you can join the videos one after the other using Movie Maker, or an equivalent program.
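Merging the per-video extracted folders takes only a few lines of Python. A sketch (folder names are examples), which prefixes each filename with the index of its source folder so that identically numbered frames from different videos cannot overwrite each other:

```python
import os
import shutil

def merge_extracted(sources, dest):
    """Copy every file from each source folder into dest,
    prefixing names with the source index to avoid collisions."""
    os.makedirs(dest, exist_ok=True)
    for i, src in enumerate(sources):
        for name in sorted(os.listdir(src)):
            shutil.copy(os.path.join(src, name),
                        os.path.join(dest, "%d_%s" % (i, name)))

# Example (paths are illustrative):
# merge_extracted(["dataset-video1/extracted", "dataset-video2/extracted"],
#                 "dataset-merged/extracted")
```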

Step 2. Training

In FakeApp, you can train your model from the TRAIN tab. Under Data A and Data B you need to copy the paths of the extracted folders. As a convention, Data A is the folder extracted from the background video, and Data B contains the faces of the person you want to insert into the Data A video. The training process will convert the face of person A into person B. In reality, the neural network works in both directions, so it does not really matter which person you choose as A and which as B.

You will also need a folder for the model. If this is your first time training from person A to person B, you can use an empty folder. FakeApp will use it to store the parameters of the trained neural network.

The training settings need to be configured before starting this process. Highlighted in red, below, are the ones that refer to the training process. Nodes and Layers configure the neural network; Batch Size controls how many faces the network is trained on at each step. The meaning of these parameters is explained in depth in another post.

If your GPU has only 2 GB of RAM, the 2 GB column below is likely the highest configuration you can run. You will have to adjust your settings depending on how much memory is available on your GPU; the values below are recommended starting points, although they may vary based on your model.

Parameter    2 GB    8 GB
Batch Size   16      128
Nodes        64      1024
Layers       3       4

If you do not have enough memory, the process will fail.
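The table above can be folded into a small helper that picks a safe starting configuration for a given amount of GPU memory. This is just a sketch of the table, not a FakeApp feature; anything between the two profiles falls back to the conservative one:

```python
def recommended_settings(gpu_memory_gb):
    """Return training settings from the table above.
    Below 8 GB, fall back to the conservative 2 GB profile."""
    if gpu_memory_gb >= 8:
        return {"batch_size": 128, "nodes": 1024, "layers": 4}
    return {"batch_size": 16, "nodes": 64, "layers": 3}

print(recommended_settings(2))  # {'batch_size': 16, 'nodes': 64, 'layers': 3}
print(recommended_settings(8))  # {'batch_size': 128, 'nodes': 1024, 'layers': 4}
```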

Monitor the progress. While training, you will see a window that shows how well the neural network is performing. The GIF below shows three hours’ worth of training, using game developer Richard Franke and his alter ego Kitty Powers (with videos from the Kitty Powers’ Matchmaker and Kitty Powers’ Love Life trailers) as person B and person A, respectively.

You can press Q at any time to stop the training process. To resume it, simply start it again using the same model folder. FakeApp also shows a score, which indicates the error committed while trying to reconstruct person A as person B and person B as person A. Values below 0.02 are usually considered acceptable.
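Because the displayed loss fluctuates from batch to batch, a smoothed value is a more reliable stopping signal than a single reading. A sketch of the idea (the exponential moving average and helper below are my own, not part of FakeApp), using the 0.02 threshold mentioned above:

```python
def smoothed_losses(losses, alpha=0.1):
    """Exponential moving average of a sequence of loss values."""
    ema = losses[0]
    out = [ema]
    for x in losses[1:]:
        ema = alpha * x + (1 - alpha) * ema
        out.append(ema)
    return out

def good_enough(losses, threshold=0.02):
    """True once the smoothed loss has dropped below the threshold."""
    return smoothed_losses(losses)[-1] < threshold

print(good_enough([0.019] * 10))  # True
```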

Step 3. Creation

The process of creating a video is very similar to the one in GET DATASET. You need to provide the path to an mp4 video and the folder of your model, that is, the folder that contains the files encoder.h5, decoder_A.h5 and decoder_B.h5. You will also need to specify the target FPS.
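A quick sanity check that the model folder is complete can save a failed run, since a missing .h5 file is a common cause of errors during creation. A minimal sketch (the path in the usage comment is illustrative):

```python
import os

MODEL_FILES = ("encoder.h5", "decoder_A.h5", "decoder_B.h5")

def missing_model_files(model_dir):
    """Return the list of expected model files absent from model_dir."""
    return [f for f in MODEL_FILES
            if not os.path.isfile(os.path.join(model_dir, f))]

# Example usage:
# if missing_model_files("C:/fakes/model"):
#     print("Model folder is incomplete; rerun training first.")
```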

Pressing CREATE will automatically:

  • Extract all the frames from the source video into the workdir-video folder,
  • Crop and align all the faces into the workdir-video/extracted folder,
  • Process each aligned face using the trained model,
  • Merge the processed faces back into the original frames and store them in the workdir-video/merged folder,
  • Join all the frames to create the final video.
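The steps above all work inside a working directory derived from the video’s name. A sketch of that path logic (the workdir- prefix matches the folder names listed above; treat the exact layout as an assumption):

```python
import os

def creation_folders(video_path):
    """Derive the working folders CREATE uses for a given video."""
    name = os.path.splitext(os.path.basename(video_path))[0]
    workdir = "workdir-" + name
    return {
        "frames": workdir,                                # raw extracted frames
        "extracted": os.path.join(workdir, "extracted"),  # cropped, aligned faces
        "merged": os.path.join(workdir, "merged"),        # frames with the swapped face
    }

print(creation_folders("video.mp4")["frames"])  # workdir-video
```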

In the settings (below), there is an option to decide whether you want person A converted to person B (A to B) or person B to person A (B to A).

The other options are used to merge the reconstructed face back into the frame. They will be discussed in detail in a later post.



A special thanks goes to Christos Sfetsios and David King, who gave me access to the machine I have used to create the deepfakes used in this tutorial.


💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.


📧 Stay updated

You will be notified when a new tutorial is released!

📝 Licensing

You are free to use, adapt and build upon this tutorial for your own projects (even commercially) as long as you credit me.

You are not allowed to redistribute the content of this tutorial on other platforms, especially the parts that are only available on Patreon.

If the knowledge you have gained has had a significant impact on your project, a mention in the credits would be very appreciated. ❤️🧔🏻




  1. wow! It’s a very useful tutorial for FakeApp users. I was so excited to install FakeApp, but I am so sad because my laptop has an Intel graphics card. Anyway, thank you for the article.
    Dal Saru

  2. Thanks for great tutorial!

    I used FakeApp 1.1 to train a model from scratch to under 0.009 loss, and the previews look great, but the merged (extracted) results do not look so great and are very different; screenshots below:

    Converted (zoomed in):

    The above converted result (merged folder) was the best I could get, using the following settings: seamless false, blur size 0, and kernel size 0.

    A previous test with another model, which was also trained from scratch to under 0.01 loss, gave an even worse result, where the previews while training looked amazing once again, but once converted it did not even look like a face but rather like a distorted nightmare stain.

    Any idea on what I’m doing wrong?

    • Hi Mathieu,

      I suspect you are creating deepfakes using porn actresses. I explained why that is a rather bad idea in an earlier post in the series.
      So yes, that is definitely one thing that is wrong. 🙂

  3. Richard
    Hi, I am having a PC built dedicated to running FakeApp 2.1 or 2.2. Can you advise me what additional software would be required, in what order it should be installed, and how to set it all up?

  4. Hey Alan! Thank you so so much for these tutorials. Really really comprehensive and helpful. One note, when I tried pausing and restarting training on 2.2, it looks like it totally reset my model. Losses went back up after 10+ hrs processing, and when I went to create the video, the faces were blank. Did I do something wrong? Thanks a lot!

  5. Hello. I’m trying to create deepfakes from pictures that contain more than one face. In this scenario, things get more complicated. Do you have any suggestions for me? Do you plan to write a tutorial about how to do this? Thanks.

    • Hi Mario!
      If you are using faceswap, you can provide an image of the face you want to use. This will force the algorithm to ignore all the others, although it is not always super accurate.

      If you are not using faceswap or that feature is not available, I would suggest editing the original video to cover all the other faces with a black box. Process the video like that, and then just merge it with the original one so that you can crop only the parts of the videos where the swapped face is present.

      Also, I hope you have full consent from all the people involved in the video!
      Don’t forget that!

  6. hi
    during “train” I got a message: “training process ended. If you did not end it yourself, an error occurred. Check the end of the log.txt file for details, and feel free to post it for help”

  7. undefined File “d:\anaconda\envs\fakeapp\lib\site-packages\PyInstaller\loader\”, line 631, in exec_module
    File “”, line 7, in
    File “C:\Users\jb nov\AppData\Local\FakeApp\app-2.2.0\resources\api\torch\”, line 76, in
    from torch._C import *
    ImportError: DLL load failed: The specified module could not be found.
    [7320] Failed to execute script execute ==== the above is the log file, your opinion please

  8. Hi,
    Can you tell me which Nvidia card you are using? (the specific model)
    I tried with an Asus Dual GTX 1060, and it did not allow me to install the CUDA developer drivers.


  9. Why don’t I get an “extracted” folder after extracting?
    I only get a hundred photos.

    And on the second try, it always errors.

    Can you help me?

    I’m using Windows 8.

  10. Hi, I can’t extract images from the video. Always the same error: “An error has occurred in the creation process. Check the end of the log.txt file for details”.

  11. I can’t seem to get past the first bit, with the sign that says Announcements: reload the app, which I think I have done, but I’m not sure.

  12. Hi, I wonder if you can help with a problem I’m having. I haven’t been able to get FakeApp 2.2 (or any other deepfake app) to work yet; when trying to train, FakeApp 2.2 terminates itself. Log.txt says this:
    “undefinedUsing GPU0 for processing
    Memory Limit: default Memory Growth: true
    ctions that this TensorFlow binary was not compiled to use: AVX AVX2
    2019-05-22 18:06:36.472569: I C:\tf_jenkins\workspace\re60 major: 5 minor: 2 memoryClockRate(GHz): 1.304
    pciBusID: 0000:01:00.0
    totalMemory: 4.00GiB freeMemory: 3.33GiB
    2019-05-22 18:06:36.473356: I C:\tf_jenkins\Flow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2)
    Focus the training preview window and press ‘q’ to stop training and save the modelperformance gains if more memory is available.
    2019-05-22 18:07:59.455476: W C:\] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.20GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 1d be performance gains if more memory is available.
    2019-05-22 18:07:59.896093: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\3tensorflow\cor6\e\common_runtime\] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.56GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.392187: W C:\tf_jenkins\worke\] Allocator (GPU_0_ybfc) ran out of memor trying to allocate 1.09GiB. The caller indicates that this is not a failure, but may mean that there could beerformance gains if more memory is available.
    p2019-05-22 18:08:00.473250: W C: Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.576086: W C:\t that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.651227: W C:\tf_jee\common_runtime\] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.33GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.849691: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    2019-05-22 18:08:00.954797: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.28GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
    Loss A: 0.20434043Loss B: 0.19918257Traceback (most recent call last):
    File “”, line 69, in
    File “”, line 65, in main
    FileNotFoundError: Model directory not found.
    [15008] Failed to execute script execute”

    What am I missing?

    • Your GPU is only 2 GB. You need to lower the default settings. Set batch size to 16, nodes to 64 and layers to 3. Good luck!

  13. Hi, great tutorial!
    a question: in your opinion, what could cause the
    Loss A:
    Loss B:
    error?
    I should point out that I have correctly processed faces A and B into the respective extracted folders.
    Thank you very much

  14. Can we make this with only a picture? There is a “picture” option in GET DATASET. How does it work? Thank you.

  15. Hello, I have encountered a strange problem.
    I’ll just copy the last line of the error message in the log i got:
    “FileNotFoundError: [Errno 2] No such file or directory: ‘C:\\Users\\User\\Downloads\\workdir-tyrion_short\\alignments.json’
    [9700] Failed to execute script execute”
    What confuses me is the amount of “\” there; also, the alignments file it’s looking for seems to be in a different directory. I’m sure I’m doing something wrong. Have you encountered this before?

  16. Hi, can I replace the face with photos taken from the internet instead of a video? And how can I ignore other faces in the video? Help would be appreciated, thanks.

  17. My PC is an Intel G2020 at 2.9 GHz with 8 GB of RAM, and my GPU is a GT 1030. Is this enough to get good results, and how much time would it take to make a 2-minute video?

  18. Hi, I stopped the training process when it was at 0.03. I resumed it as you said, with the same folders again: model, Data A, Data B. The problem is that it’s not resuming but starting from the beginning… Did I do something wrong?

  19. Hey Alan, Thanks for the tutorials. I am running fakeapp now. I extracted about 6000+ frames from a video and now it is initializing. It has been doing that for about 10 mins. Is this a normal part of the process? Thanks Again

  20. Hello, I have this problem after extracting the images:
    undefined File “”, line 136, in main
    File “”, line 103, in iter_face_alignments.
    Can anyone help me with this? Thanks.

    • Same problem, it doesn’t run the process. Did you manage to get it working? It looks to me like a library error. No idea.

  21. Hi, I’m using FakeApp 2.2 but the program gets an error and doesn’t do the align process:

    undefined File “ctypes\”, line 348, in __init__
    [10720] Failed to execute script execute

  22. An error has occurred in the creation process. Check the end of the log.txt file for details, and feel free to post on for help.

    Need Help here

  23. Hi, I’m getting this error in the log after the app splits my video into 163 png images in the dataset-video1 folder, but in the extracted subfolder there are no images of the face: “undefined File “”, line 136, in main
    File “”, line 136, in main”. Can you help me please?

    • I have the same problem: “undefined File “”, line 136, in main”.
      Can anyone help me?
      I just reduced the video to 1280×720.

  24. I have an interesting question. Let’s say there’s a video of two people sitting side by side. Can this technology completely replace one person in that video with another, say myself, using deepfakes?

    • I have a similar question too. I’m thinking that FakeApp will change both faces. Just the faces. But I don’t know how this software works yet, because it didn’t work here.

    • When you download the software, there is a zip archive named “core”. Unzip it and put all its content in this location: “C:\Users\[username]\AppData\Local\FakeApp\app-2.2.0\resources\api”

  25. I have installed FakeApp and CUDA 9.0 + patches 1-4 as you described in the tutorial above, and downloaded cuDNN 7.5 from Nvidia and extracted it into the CUDA 9 folder. When I run the program, it extracts the photos from the video into the dataset folder but does not create the extracted folder with aligned faces, and the log file shows this error (undefined File “ctypes\”, line 348, in __init__). Please help.

  26. Hi, here is the result:
    Reading config file from C://fakes//encoder.h5\config.p
    Model config file is missing or corrupted. Briefly rerun the Train tool with model to produce a new config file.
    Traceback (most recent call last):
    File “”, line 59, in
    File “d:\anaconda\envs\merging\lib\site-packages\PyInstaller\loader\”, line 631, in exec_module
    File “”, line 1, in
    ImportError: cannot import name ‘layers’
    [10788] Failed to execute script merge_faces
    Do you know what I did wrong ?

  27. Hi,
    I got some problems in ‘merge’ phase
    Could you tell me what is wrong?
    Here is the comments in log file.
    Thank you!

    undefinedffmpeg version git-2017-12-29-0c78b6a Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.2.0 (GCC)
    configuration: –enable-gpl –enable-version3 –enable-sdl2 –enable-bzlib –enable-fontconfig –enable-gnutls –enable-iconv –enable-libass –enable-libbluray –enable-libfreetype –enable-libmp3lame –enable-libopencore-amrnb –enable-libopencore-amrwb –enable-libopenjpeg –enable-libopus –enable-libshine –enable-libsnappy –enable-libsoxr –enable-libtheora –enable-libtwolame –enable-libvpx –enable-libwavpack –enable-libwebp –enable-libx264 –enable-libx265 –enable-libxml2 –enable-libzimg –enable-lzma –enable-zlib –enable-gmp –enable-libvidstab –enable-libvorbis –enable-libvo-amrwbenc –enable-libmysofa –enable-libspeex –enable-amf –enable-cuda –enable-cuvid –enable-d3d11va –enable-nvenc –enable-dxva2 –enable-avisynth –enable-libmfx
    libavutil 56. 7.100 / 56. 7.100
    libavcodec 58. 9.100 / 58. 9.100
    libavformat 58. 3.100 / 58. 3.100
    libavdevice 58. 0.100 / 58. 0.100
    libavfilter 7. 8.100 / 7. 8.100
    libswscale 5. 0.101 / 5. 0.101
    libswresample 3. 0.101 / 3. 0.101
    libpostproc 55. 0.100 / 55. 0.100
    [image2 demuxer @ 00000295fd1ba840] Unable to parse option value “” as video rate
    [image2 demuxer @ 00000295fd1ba840] Error setting option framerate to value .
    D:\workdir-11\merged\out%d.png: Invalid argument