We present parallel implementations of two tree-based nearest neighbor search data structures: SG-Tree and Cover Tree.
The cover tree data structure was originally presented in [Beygelzimer, Kakade, and Langford, "Cover Trees for Nearest Neighbor", ICML 2006] and improved in [Izbicki and Shelton, "Faster Cover Trees", ICML 2015].
SG-Tree is a new data structure for exact nearest neighbor search, inspired by the Cover Tree and its improvement, which has been used in the TerraPattern project. At a high level, SG-Tree builds a hierarchical tree in which each node performs a "coarse" clustering. The centers of these "clusters" become the children, and subsequent insertions are recursively performed on those children. When answering a NN query, we prune out candidates based on a subset of the dimensions being queried. This is particularly useful when trying to find the nearest neighbor in a highly clustered subset of the data, e.g. when the data comes from a recursive mixture of Gaussians or, more generally, a time-marginalized coalescent process. The effect of these two optimizations is that our data structure is extremely simple and highly parallelizable, and its performance is comparable to existing NN implementations on many datasets.
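To make the clustering idea concrete, here is a minimal single-threaded sketch in Python. It is an illustration only, not the library's actual implementation: each node holds a cluster center with a covering radius that halves at every level, and exact NN queries use the triangle inequality to prune subtrees. The dimension-subset pruning and the parallelism of the real SG-Tree are omitted.

```python
import math
import random

def dist(a, b):
    """Euclidean (L2) distance between two points given as sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Node:
    """A cluster center plus child clusters whose radii halve per level."""
    def __init__(self, point, radius):
        self.point = point       # center of this "coarse" cluster
        self.radius = radius     # children are kept within this distance
        self.children = []

    def insert(self, p):
        # Descend into the nearest child whose covering ball contains p;
        # otherwise p becomes a new cluster center at this level.
        best = min(self.children, key=lambda c: dist(p, c.point), default=None)
        if best is not None and dist(p, best.point) <= best.radius:
            best.insert(p)
        else:
            self.children.append(Node(p, self.radius / 2.0))

    def nearest(self, q, best_p=None, best_d=float("inf")):
        d = dist(q, self.point)
        if d < best_d:
            best_p, best_d = self.point, d
        # Any point in child c's subtree lies within 2 * c.radius of c.point
        # (geometric series of halving radii), so the triangle inequality
        # lets us skip subtrees that cannot beat the current best.
        for c in sorted(self.children, key=lambda c: dist(q, c.point)):
            if dist(q, c.point) - 2.0 * c.radius < best_d:
                best_p, best_d = c.nearest(q, best_p, best_d)
        return best_p, best_d

# Usage: index 1000 random 2-D points and query one.
random.seed(0)
root = Node((0.5, 0.5), radius=1.0)  # initial scale chosen for [0, 1]^2 data
for _ in range(1000):
    root.insert((random.random(), random.random()))
print(root.nearest((0.25, 0.75)))    # -> (closest point, its distance)
```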
Under active development
Just use `python setup.py install`, and then in Python you can `import nntree`. Details of the Python API are provided in `API.pdf`.

If you do not have root privileges, install with `python setup.py install --user` and make sure the installation folder is on your path.
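For orientation, here is a hypothetical usage sketch. The class name `CoverTree`, its constructor, and the `NearestNeighbour` method are assumptions made purely for illustration; consult `API.pdf` for the actual names and signatures.

```python
import numpy as np
import nntree  # installed by `python setup.py install`

# Hypothetical API below -- the real names/signatures are in API.pdf.
points = np.random.rand(100000, 100)         # 100k points in 100 dimensions
tree = nntree.CoverTree(points)              # assumed constructor
queries = np.random.rand(10, 100)
neighbours = tree.NearestNeighbour(queries)  # assumed batch NN query
```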
The code is organized as follows:

- `src` - sources, within their respective folders
- `lib` folder
- `scripts` folder
- `data` - a placeholder folder where to put the data
- `build` and `dist` - folders that will be created to hold the executables

We will now show how to run our Cover Tree on a single machine using a synthetic dataset.
First of all, compile by running
```
make
```
Generate the synthetic dataset:
```
python data/generateData.py
```
Run the Cover Tree:
```
dist/cover_tree data/train_100d_1000k_1000.dat data/test_100d_1000k_10.dat
```
The makefile has some useful features:

- If you have the Intel® C++ Compiler, you can instead run `make intel`.
- If you want to use the Intel® C++ Compiler's cross-file optimization (ipo), run `make inteltogether`.
- Yet another alternative is the LLVM/Clang compiler (minimum version 3.4, for C++14 support): `make llvm`. For this to work under Linux, you will probably have to install at least these packages (in version 3.4 or later): `clang`, `libc++-dev`.
- You can also selectively compile individual modules by specifying `make <module-name>`, or clean one individually with `make clean-<module-name>`.
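For example, assuming the Cover Tree code is a module named `cover_tree` (as the `dist/cover_tree` executable suggests), `make cover_tree` would rebuild just that module and `make clean-cover_tree` would clean it.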
Based on our evaluation, the implementation scales well and is efficient. For example, on an Amazon EC2 c4.8xlarge instance we could insert more than 1 million vectors of 1000 dimensions in Euclidean space with the L2 norm in under 250 seconds. At query time we can process more than 300 queries per second per core.
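As a rough back-of-the-envelope estimate of whole-instance throughput (assuming, purely for illustration, that the per-core rate scales linearly across the c4.8xlarge's 36 vCPUs):

```python
vcpus = 36                   # vCPU count of an EC2 c4.8xlarge
qps_per_core = 300           # per-core query rate reported above
print(vcpus * qps_per_core)  # ~10800 queries/second if scaling were linear
```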
If the build fails with an error like "instruction not found", the system most probably does not support the AVX2 instruction set. To solve this issue, in `setup.py` and `src/cover_tree/makefile`, please change `march=core-avx2` to `march=corei7`.
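Before editing the flags, you can quickly check whether your CPU reports AVX2 (a Linux-only convenience sketch, not part of the package):

```python
# Linux exposes CPU feature flags in /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    has_avx2 = "avx2" in f.read()
print("AVX2 supported" if has_avx2 else "No AVX2: use march=corei7")
```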