Detecting Photoshopped Faces by Scripting Photoshop
Alexei A. Efros¹, et al.
¹UC Berkeley, ²Adobe Research.
In ICCV, 2019.
9/30/2019 Update: The code and model weights have been updated to correspond to v2 of our paper. Note that the global classifier architecture has changed from ResNet-50 to DRN-C-26.
1/19/2019 Update: The evaluation dataset is released! The link is here.
Welcome! Computer vision algorithms often work well on some images, but fail on others. Ours is like this too. We believe our work is a significant step forward in detecting and undoing facial warping by image editing tools. However, there are still many hard cases, and this is by no means a solved problem.
This is partly because our algorithm is trained on faces warped by the Face-aware Liquify tool in Photoshop, and will thus work well for these types of images, but not necessarily for others. We call this the "dataset bias" problem. Please see the paper for more details on this issue.
While we trained our models with various data augmentations to be more robust to downstream operations such as resizing, JPEG compression, and saturation/brightness changes, many other retouches (e.g., airbrushing) can alter the low-level statistics of an image and make detection much harder.
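To make the augmentation idea concrete, here is a minimal sketch of sampling such benign perturbations per training image. This is not the authors' actual pipeline; the parameter ranges are illustrative assumptions:

```python
import random

def sample_augmentation():
    """Sample a random set of benign perturbations to apply before training.
    Ranges are illustrative assumptions, not the paper's actual settings."""
    return {
        "resize_scale": random.uniform(0.5, 1.0),  # downscale, then restore size
        "jpeg_quality": random.randint(30, 100),   # recompression quality
        "saturation": random.uniform(0.8, 1.2),    # mild color jitter
        "brightness": random.uniform(0.8, 1.2),    # mild brightness jitter
    }

params = sample_augmentation()
print(sorted(params))
```

Each sampled dictionary would then drive an image-processing step (resize, re-encode as JPEG, adjust color) applied to the training face before it reaches the classifier.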
Please enjoy our results and have fun trying out our models!
```bash
# Install dependencies
pip install -r requirements.txt

# Run the global classifier
python global_classifier.py --input_path examples/modified.jpg --model_path weights/global.pth

# Run the local detector
python local_detector.py --input_path examples/modified.jpg --model_path weights/local.pth --dest_folder out/
```
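To score many images, one simple approach (a sketch, not part of the repository) is to build the command line for each file and invoke the script with `subprocess`:

```python
import pathlib
import subprocess

def build_cmd(image_path, model_path="weights/global.pth"):
    """Build the command line for scoring one image with the global classifier."""
    return ["python", "global_classifier.py",
            "--input_path", str(image_path),
            "--model_path", model_path]

def score_folder(folder):
    """Run the global classifier on every .jpg in a folder, one process each."""
    for img in sorted(pathlib.Path(folder).glob("*.jpg")):
        subprocess.run(build_cmd(img), check=True)

print(build_cmd("examples/modified.jpg"))
```

Launching one process per image is slow for large folders; for batch evaluation, the provided `eval.py` script below is the better entry point.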
Note: Our models are trained on faces cropped by the dlib CNN face detector. Both scripts include a `--no_crop` option that runs the models without face cropping; use it only for images that are already cropped to the face.
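The cropping convention matters because the models expect a face box with some surrounding context. A minimal sketch of expanding a detected face box and clamping it to the image bounds (the 40% margin is an illustrative assumption, not the repository's exact value):

```python
def expand_box(left, top, right, bottom, img_w, img_h, margin=0.4):
    """Expand a detected face box by `margin` of its size on each side,
    then clamp to the image bounds. Margin value is illustrative."""
    w, h = right - left, bottom - top
    left = max(0, int(left - margin * w))
    top = max(0, int(top - margin * h))
    right = min(img_w, int(right + margin * w))
    bottom = min(img_h, int(bottom + margin * h))
    return left, top, right, bottom

# e.g. a 100x100 face box inside a 640x480 image
print(expand_box(200, 150, 300, 250, 640, 480))  # -> (160, 110, 340, 290)
```

In practice the box would come from the dlib CNN detector; the expanded region is then cropped and resized before being fed to the model.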
A validation set consisting of 500 original and 500 modified images each from Flickr and OpenImage can be downloaded here. Due to licensing issues, the released validation set is different from the set we evaluate in the paper, and the training set will not be released.
In the zip file, original faces are in the `original` folder, and modified faces are in the `modified` folder. For reference, the `reference` folder contains the same faces as the `modified` folder, but before modification (i.e., the originals).
To evaluate the dataset, run:
```bash
# Download the dataset
cd data
bash download_valset.sh
cd ..

# Run the evaluation script (model weights need to be downloaded first)
python eval.py --dataroot data --global_pth weights/global.pth --local_pth weights/local.pth --gpu_id 0
```
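For intuition about what such an evaluation reports, here is a self-contained sketch of two standard binary-classification metrics, accuracy and average precision (AP), over predicted "modified" scores. The labels and scores below are made-up toy data, and this is not the repository's `eval.py`:

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def average_precision(labels, scores):
    """AP: mean of precision measured at each positive, ranked by score."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, total_prec = 0, 0.0
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            total_prec += hits / i
    return total_prec / max(hits, 1)

labels = [1, 0, 1, 1, 0]           # 1 = modified, 0 = original (toy data)
scores = [0.9, 0.8, 0.7, 0.3, 0.2]  # predicted probability of "modified"
print(accuracy(labels, scores))     # -> 0.6
print(average_precision(labels, scores))
```

AP is threshold-free, so it is the more informative number when the classifier's scores are well-ranked but not calibrated to a 0.5 cutoff.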
The following are the models' performances on the released set:
This repository borrows partially from the pytorch-CycleGAN-and-pix2pix, drn, and PyTorch torchvision repositories.
If you find this useful for your research, please consider citing our paper. Please contact Sheng-Yu Wang <sheng-yu_wang at berkeley dot edu> with any comments or feedback.