Multiview Human Pose Estimation Pytorch

This is the official PyTorch implementation of our ICCV 2019 paper "Cross View Fusion for 3D Human Pose Estimation" (https://chunyuwang.netlify.com/img/ICCV_Cross_view_camera_ready.pdf).

Quick start

Installation

  1. Clone this repo; we'll refer to the cloned multiview-pose directory as ${POSE_ROOT}, as sketched below.
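
    For example, a minimal sketch ("<this-repo-url>" is a placeholder for this repository's actual clone URL):

    git clone <this-repo-url> multiview-pose
    export POSE_ROOT=$(pwd)/multiview-pose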

  2. Install dependencies.
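
    For example, assuming the packages pinned in requirements.txt match your environment:

    pip install -r requirements.txt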

  3. Download the PyTorch ImageNet-pretrained models. Please download them under ${POSE_ROOT}/models, and make them look like this:

    ${POSE_ROOT}/models
    └── pytorch
        └── imagenet
            ├── resnet152-b121ed2d.pth
            ├── resnet50-19c8e357.pth
            └── mobilenet_v2.pth.tar
    

    They can be downloaded from the following link: https://onedrive.live.com/?authkey=%21AF9rKCBVlJ3Qzo8&id=93774C670BD4F835%21930&cid=93774C670BD4F835
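
    To sanity-check a downloaded checkpoint, here is a minimal sketch; it assumes the file is a plain state_dict, as the torchvision ImageNet checkpoints are:

    import torch

    # Load on CPU so the check needs no GPU.
    state_dict = torch.load('models/pytorch/imagenet/resnet50-19c8e357.pth', map_location='cpu')
    print(len(state_dict), 'parameter tensors')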

  4. Initialize the output (training model output) and log (TensorBoard log) directories:

    mkdir output
    mkdir log
    

    Your directory tree should then look like this:

    ${POSE_ROOT}
    ├── data
    ├── experiments-local
    ├── experiments-philly
    ├── lib
    ├── log
    ├── models
    ├── output
    ├── pose_estimation
    ├── README.md
    ├── requirements.txt
    

Data preparation

For MPII data, please download from the MPII Human Pose Dataset; the original annotation files are in MATLAB format. We have converted them to JSON format, which you also need to download from OneDrive. Extract everything under ${POSE_ROOT}/data, and make it look like this:

${POSE_ROOT}
|-- data
|-- |-- MPII
    |-- |-- annot
        |   |-- gt_valid.mat
        |   |-- test.json
        |   |-- train.json
        |   |-- trainval.json
        |   |-- valid.json
        |-- images
            |-- 000001163.jpg
            |-- 000003072.jpg

If you zip the image files into a single zip file, you should organize the data like this:

${POSE_ROOT}
|-- data
`-- |-- MPII
    `-- |-- annot
        |   |-- gt_valid.mat
        |   |-- test.json
        |   |-- train.json
        |   |-- trainval.json
        |   `-- valid.json
        `-- images.zip
            `-- images
                |-- 000001163.jpg
                |-- 000003072.jpg
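
Note that the images/ folder itself is kept inside images.zip. A minimal sketch for building such an archive (assuming you run it from ${POSE_ROOT}/data/MPII):

zip -r images.zip images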

For Human3.6M data, please follow CHUNYUWANG/H36M-Toolbox to prepare the images and annotations, and organize them like this:

${POSE_ROOT}
|-- data
|-- |-- h36m
    |-- |-- annot
        |   |-- h36m_train.pkl
        |   |-- h36m_validation.pkl
        |-- images
            |-- s_01_act_02_subact_01_ca_01 
            |-- s_01_act_02_subact_01_ca_02

If you zip the image files into a single zip file, you should organize the data like this:

${POSE_ROOT}
|-- data
`-- |-- h36m
    `-- |-- annot
        |   |-- h36m_train.pkl
        |   |-- h36m_validation.pkl
        `-- images.zip
            `-- images
                |-- s_01_act_02_subact_01_ca_01
                |-- s_01_act_02_subact_01_ca_02

Limb length prior for 3D pose estimation: download the limb length prior data from https://1drv.ms/u/s!AjX41AtnTHeTiQs7hDJ2sYoGJDEB?e=YyJcI4 and put it at ${POSE_ROOT}/data/pict/pairwise.pkl.
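
To verify the download is a readable pickle, a minimal sketch (the internal structure of pairwise.pkl is specific to this repo, so this only inspects the top-level object):

import pickle

with open('data/pict/pairwise.pkl', 'rb') as f:
    pairwise = pickle.load(f)

# Print the top-level type; the contents encode the limb length prior.
print(type(pairwise))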

2D Training and Testing

Multiview training on the mixed dataset (MPII+H36M) and testing on H36M:

python run/pose2d/train.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml
python run/pose2d/valid.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml

3D Testing

Multiview testing on H36M (CPU or GPU version):

python run/pose3d/estimate.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml       # CPU version
python run/pose3d/estimate_cuda.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml  # GPU version

Citation

If you use our code or models in your research, please cite:

@inproceedings{multiviewpose,
    author={Qiu, Haibo and Wang, Chunyu and Wang, Jingdong and Wang, Naiyan and Zeng, Wenjun},
    title={Cross View Fusion for 3D Human Pose Estimation},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year = {2019}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

The video demo is available here.
