(Version 0.3, Last Update 10-03-2017)
The project follows the index below:
This repository contains my Master's thesis project, "Develop a Video Object Tracking with TensorFlow Technology". It is still under development, so many updates will be made. In this work I adopted the architecture and problem-solving strategy of the T-CNN paper (arXiv), which won the IMAGENET 2015 VID teaser challenge. The whole pipeline is therefore made of several components in cascade:
Notice that the Still Image Detection component can be a single model or be decomposed into two sub-components:
- First: determine "where" the objects are in the frame;
- Second: determine "what" each object in the frame is.
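The decomposition above can be sketched as a minimal two-stage pipeline. The `localize` and `classify` callables here are hypothetical placeholders for illustration, not functions from this repository:

```python
def two_stage_detect(frame, localize, classify):
    """Still Image Detection split into 'where' and 'what'.

    localize(frame) -> list of bounding boxes (the 'where')
    classify(frame, box) -> class label for one box (the 'what')
    """
    detections = []
    for box in localize(frame):       # first stage: where
        label = classify(frame, box)  # second stage: what
        detections.append((box, label))
    return detections
```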
My project builds on several open-source TensorFlow projects, such as:
To install the script you only need to download the repository. To run the script you must have the following installed:
All the necessary Python libraries can be installed easily through `pip install package-name`. If you want to follow a guide to install the requirements, here is the link to a tutorial I wrote for myself and for a Deep Learning course at UPC.
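A quick way to verify the environment before running anything is to check which packages are importable. The `REQUIRED` list below is an assumption on my part; replace it with the actual packages from the tutorial:

```python
import importlib.util

def missing_packages(packages):
    """Return the packages that are not importable in the current environment."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# Hypothetical requirements list; adjust to the real dependencies.
REQUIRED = ["tensorflow", "cv2", "numpy"]
print("Missing:", missing_packages(REQUIRED))
```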
You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system.

## i. Setting Parameters

These are the command-line arguments taken from the script. Most of them are optional; only the video path must be specified when calling the script:
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--det_frames_folder', default='det_frames/', type=str)
parser.add_argument('--det_result_folder', default='det_results/', type=str)
parser.add_argument('--result_folder', default='summary_result/', type=str)
parser.add_argument('--summary_file', default='results.txt', type=str)
parser.add_argument('--output_name', default='output.mp4', type=str)
parser.add_argument('--perc', default=5, type=int)
parser.add_argument('--path_video', required=True, type=str)
```
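As a quick sanity check of the defaults, the parser can be exercised directly (the same arguments are reproduced here so the snippet is self-contained):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--det_frames_folder', default='det_frames/', type=str)
parser.add_argument('--det_result_folder', default='det_results/', type=str)
parser.add_argument('--result_folder', default='summary_result/', type=str)
parser.add_argument('--summary_file', default='results.txt', type=str)
parser.add_argument('--output_name', default='output.mp4', type=str)
parser.add_argument('--perc', default=5, type=int)
parser.add_argument('--path_video', required=True, type=str)

# Only the required argument is supplied; everything else keeps its default.
args = parser.parse_args(['--path_video', 'video.mp4'])
print(args.path_video)  # video.mp4
print(args.perc)        # 5
```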
Now you have to download the weights for YOLO and put them into `/YOLO_DET_Alg/weights/`.
For background on YOLO, here you can find the original code (C implementation) and the paper.
After setting the parameters, we can proceed and run the script:
```shell
python VID_yolo.py --path_video video.mp4
```
You will see some terminal output like:
You will see a real-time frame output (like the one below), and then finally everything will be embedded into the output video. I uploaded the first two tests I made in the folder `/video_result`; you can download them and take a look at the final result. The first one has problems in the frame order, which is why you see so much flickering in the video image; the problem was later solved, and the second shows no frame flickering:
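The frame-ordering bug mentioned above is typically caused by lexicographic sorting of filenames (`frame_10.png` sorts before `frame_2.png`); a numeric sort key fixes it. This helper is an illustrative sketch, not code from the repository:

```python
import re

def sort_frames(frame_names):
    """Order frame filenames by the number embedded in the name,
    so frame_2.png comes before frame_10.png."""
    def frame_number(name):
        match = re.search(r'(\d+)', name)
        return int(match.group(1)) if match else -1
    return sorted(frame_names, key=frame_number)

print(sort_frames(["frame_10.png", "frame_2.png", "frame_1.png"]))
# ['frame_1.png', 'frame_2.png', 'frame_10.png']
```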
These are the command-line arguments taken from the script; most of them are optional. As before, only the video path must be specified when calling the script:
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--output_name', default='output.mp4', type=str)
parser.add_argument('--hypes', default='./hypes/overfeat_rezoom.json', type=str)
parser.add_argument('--weights', default='./output/save.ckpt-1090000', type=str)
parser.add_argument('--perc', default=2, type=int)
parser.add_argument('--path_video', required=True, type=str)
```
I will soon upload a weights file for download. The training scripts and specs for the multi-class implementation will be added after the end of my thesis project.
Download the .zip files linked in the Download section and replace the folders.
Then, after setting the parameters, we can proceed and run the script:
```shell
python VID_tensorbox_multi_class.py --path_video video.mp4
```
In the folder `video_result_OVT` you can find the result files from runs of the VID TENSORBOX scripts.
All the scripts below are written for the VID classes, so if you want to adapt them to other classes you simply have to change the `Classes.py` file, where the correspondences between codes and names are defined. All the data on the images are computed with respect to a specific image ratio, because TENSORBOX works only with 640x480 PNG images; you will have to change the code a little to adapt it to your needs. I will provide four scripts:
I have also added some scripts to pre-process and prepare the dataset for training the last component, the Inception model; you can find them in a subfolder of the dataset scripts folder.
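Since TENSORBOX works at a fixed 640x480 resolution, boxes predicted at that size have to be rescaled to the original video resolution. Here is a minimal sketch of that rescaling (the function name and signature are mine, not the repository's):

```python
def scale_box(box, orig_w, orig_h, net_w=640, net_h=480):
    """Map an (x1, y1, x2, y2) box from the 640x480 network input
    back to the original frame resolution."""
    x1, y1, x2, y2 = box
    sx = orig_w / net_w  # horizontal scale factor
    sy = orig_h / net_h  # vertical scale factor
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

print(scale_box((0, 0, 640, 480), 1280, 960))  # (0.0, 0.0, 1280.0, 960.0)
```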
According to the LICENSE file of the original code,
Below are the links to the weights files for Inception and TENSORBOX from my retraining experiments:
Thanks to Professors: