Project Name | Stars | Last Commit | Open Issues | License | Language | Description
---|---|---|---|---|---|---
Psmoveapi | 441 | a month ago | 10 | other | C | Cross-platform library for 6DoF tracking of the PS Move Motion Controller. Sensor fusion, computer vision, ambient display (LED orb).
Multiview Human Pose Estimation Pytorch | 388 | 2 years ago | 6 | mit | Python | Official PyTorch implementation of "Cross View Fusion for 3D Human Pose Estimation", ICCV 2019.
Centerfusion | 348 | 8 months ago | 28 | mit | Python | CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection.
Visual Gps Slam | 263 | 18 days ago | 1 | gpl-3.0 | C++ | Master's thesis research on the fusion of visual SLAM and GPS; contains the research paper, code, and other interesting data.
Awesome 3d Detectors | 249 | a year ago | 1 | | | Paper list of awesome 3D detection methods.
Robot_pose_ekf | 187 | 2 years ago | | bsd-3-clause | C++ | ROS package that applies sensor fusion to the robot's IMU and odometry values to estimate its 3D pose.
Sketchmodeling | 103 | 2 years ago | 10 | other | C++ | Source code for the Sketch Modeling project: reconstruct a 3D shape from line-drawing sketches.
Kinectshape | 76 | 7 years ago | | mit | C++ | Implementation of the KinectFusion 3D shape reconstruction method.
Epnet | 41 | 3 years ago | | mit | Python | EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection (ECCV 2020).
Plen 3dmodel_fusion360 | 39 | 8 months ago | 2 | other | | PLEN2's 3D model data implemented in Autodesk Fusion 360.
This repo implements our ICCV paper "Cross View Fusion for 3D Human Pose Estimation" https://chunyuwang.netlify.com/img/ICCV_Cross_view_camera_ready.pdf
Clone this repo; we will refer to the directory that you cloned multiview-pose into as ${POSE_ROOT}.
Install the dependencies listed in requirements.txt.
Download the PyTorch ImageNet-pretrained models. Please place them under ${POSE_ROOT}/models, organized like this:
${POSE_ROOT}/models
└── pytorch
└── imagenet
├── resnet152-b121ed2d.pth
├── resnet50-19c8e357.pth
└── mobilenet_v2.pth.tar
They can be downloaded from the following link: https://onedrive.live.com/?authkey=%21AF9rKCBVlJ3Qzo8&id=93774C670BD4F835%21930&cid=93774C670BD4F835
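After downloading, a quick sanity check can confirm the checkpoints are in place. This is a minimal sketch: the file names are taken from the directory tree above, `missing_models` is a hypothetical helper, and `pose_root` is whatever you set ${POSE_ROOT} to.

```python
import os

# Checkpoint paths as listed in the directory tree above.
EXPECTED_MODELS = [
    "pytorch/imagenet/resnet152-b121ed2d.pth",
    "pytorch/imagenet/resnet50-19c8e357.pth",
    "pytorch/imagenet/mobilenet_v2.pth.tar",
]

def missing_models(pose_root):
    """Return the expected checkpoint paths not present under ${POSE_ROOT}/models."""
    models_dir = os.path.join(pose_root, "models")
    return [p for p in EXPECTED_MODELS
            if not os.path.isfile(os.path.join(models_dir, p))]
```

If the returned list is non-empty, download the reported files from the OneDrive link above before training.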
Init the output (training model output) and log (tensorboard log) directories:
mkdir output
mkdir log
Your directory tree should then look like this:
${POSE_ROOT}
├── data
├── experiments-local
├── experiments-philly
├── lib
├── log
├── models
├── output
├── pose_estimation
├── README.md
├── requirements.txt
For MPII data, please download from the MPII Human Pose Dataset. The original annotation files are in MATLAB format; we have converted them to JSON format, which you also need to download from OneDrive. Extract them under ${POSE_ROOT}/data, organized like this:
${POSE_ROOT}
|-- data
|-- |-- MPII
|-- |-- annot
| |-- gt_valid.mat
| |-- test.json
| |-- train.json
| |-- trainval.json
| |-- valid.json
|-- images
|-- 000001163.jpg
|-- 000003072.jpg
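With the annotations extracted as above, they can be loaded with the standard library alone. A minimal sketch, assuming the converted files are plain JSON lists of per-image records (the exact keys depend on the converted annotations, and `load_mpii_annotations` is a hypothetical helper, not part of this repo):

```python
import json
import os

def load_mpii_annotations(pose_root, split="train"):
    """Load one of the converted MPII annotation files: train, valid, trainval, or test."""
    path = os.path.join(pose_root, "data", "MPII", "annot", f"{split}.json")
    with open(path) as f:
        return json.load(f)
```

For example, `len(load_mpii_annotations(pose_root, "train"))` gives the number of annotated training samples.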
If you zip the image files into a single zip file, you should organize the data like this:
${POSE_ROOT}
|-- data
`-- |-- MPII
`-- |-- annot
| |-- gt_valid.mat
| |-- test.json
| |-- train.json
| |-- trainval.json
| `-- valid.json
`-- images.zip
`-- images
|-- 000001163.jpg
|-- 000003072.jpg
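When the images are kept zipped as above, individual files can be read without unpacking the archive, using the standard-library zipfile module. A minimal sketch; the inner `images/` prefix follows the tree above, and `read_image_bytes` is a hypothetical helper:

```python
import zipfile

def read_image_bytes(zip_path, image_name):
    """Read one image's raw bytes from images.zip, which contains an inner images/ folder."""
    with zipfile.ZipFile(zip_path) as zf:
        return zf.read(f"images/{image_name}")
```

Reading directly from the archive avoids duplicating the dataset on disk, at the cost of a small per-read decompression overhead.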
For Human36M data, please follow CHUNYUWANG/H36M-Toolbox to prepare images and annotations, and make them look like this:
${POSE_ROOT}
|-- data
|-- |-- h36m
|-- |-- annot
| |-- h36m_train.pkl
| |-- h36m_validation.pkl
|-- images
|-- s_01_act_02_subact_01_ca_01
|-- s_01_act_02_subact_01_ca_02
If you zip the image files into a single zip file, you should organize the data like this:
${POSE_ROOT}
|-- data
`-- |-- h36m
`-- |-- annot
| |-- h36m_train.pkl
| |-- h36m_validation.pkl
`-- images.zip
`-- images
|-- s_01_act_02_subact_01_ca_01
|-- s_01_act_02_subact_01_ca_02
For the limb length prior used in 3D pose estimation, please download the prior data from https://1drv.ms/u/s!AjX41AtnTHeTiQs7hDJ2sYoGJDEB?e=YyJcI4
and put it at data/pict/pairwise.pkl
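Since pairwise.pkl is a Python pickle, it can be opened with the standard library to verify the download. A minimal sketch: the file's internal structure is not documented here, so this only loads the object, and `load_limb_length_prior` is a hypothetical helper name.

```python
import os
import pickle

def load_limb_length_prior(pose_root):
    """Load the pairwise limb-length prior from ${POSE_ROOT}/data/pict/pairwise.pkl."""
    path = os.path.join(pose_root, "data", "pict", "pairwise.pkl")
    with open(path, "rb") as f:
        return pickle.load(f)
```

Inspecting the returned object (e.g. with `type(...)` and `len(...)`) is an easy way to confirm the file was not corrupted in transit.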
Multiview Training on Mixed Dataset (MPII+H36M) and testing on H36M
python run/pose2d/train.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml
python run/pose2d/valid.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml
Multiview testing on H36M (CPU or GPU)
python run/pose3d/estimate.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml       # CPU version
python run/pose3d/estimate_cuda.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml  # GPU version
If you use our code or models in your research, please cite:
@inproceedings{multiviewpose,
author={Qiu, Haibo and Wang, Chunyu and Wang, Jingdong and Wang, Naiyan and Zeng, Wenjun},
title={Cross View Fusion for 3D Human Pose Estimation},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2019}
}
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.