|Project Name|Stars|Downloads / Dependents|Most Recent Commit|Total Releases|Latest Release|Open Issues|License|Language|Description|
|---|---|---|---|---|---|---|---|---|---|
|DuckDB|10,537|36|12 hours ago|1,472|July 07, 2022|556|MIT|C++|An in-process SQL OLAP database management system|
|Doris|8,476||7 hours ago|||1,718|Apache-2.0|Java|Apache Doris is an easy-to-use, high-performance, unified analytics database.|
|Databend|6,109||9 hours ago|||630|Other|Rust|A modern cloud data warehouse focused on reducing cost and complexity for massive-scale analytics; an open-source alternative to Snowflake, also available in the cloud at https://app.databend.com|
|StarRocks|4,664||9 hours ago|||942|Apache-2.0|Java|A next-gen, sub-second MPP database for all analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries.|
|CrateDB|3,692|4 / 1|a day ago|13|October 25, 2016|249|Apache-2.0|Java|A distributed SQL database for storing and analyzing massive amounts of data in real time, built on top of Lucene.|
|HeavyDB|2,792|4|4 months ago|7|September 02, 2021|262|Apache-2.0|C++|HeavyDB (formerly OmniSciDB)|
|MatrixOne|1,492||9 hours ago|5|July 14, 2022|493|Apache-2.0|Go|A hyperconverged cloud-edge native database|
|RadonDB|1,466||2 years ago|17|February 24, 2021|2|GPL-3.0|Go|An open-source, cloud-native MySQL database for building global, scalable cloud services|
|RisingLight|1,212||a month ago|2|April 04, 2022|48|Apache-2.0|Rust|An OLAP database system for educational purposes|
|Awesome Graph|991||a month ago|||8|||A curated list of resources for graph databases and graph computing tools|
DuckDB is a high-performance analytical database system. It is designed to be fast, reliable and easy to use. DuckDB provides a rich SQL dialect, with support far beyond basic SQL. DuckDB supports arbitrary and nested correlated subqueries, window functions, collations, complex types (arrays, structs), and more. For more information on the goals of DuckDB, please refer to the Why DuckDB page on our website.
If you want to install and use DuckDB, please see our website for installation and usage instructions.
For CSV and Parquet files, data import is as simple as referencing the file in the `FROM` clause:

```sql
SELECT * FROM 'myfile.csv';
SELECT * FROM 'myfile.parquet';
```
Refer to our Data Import section for more information.
The website contains a reference of functions and SQL constructs available in DuckDB.
For development, DuckDB requires CMake, Python 3, and a C++11-compliant compiler. Run `make` in the root directory to compile the sources. For development, use `make debug` to build a non-optimized debug version. After making changes, run `make unit` and `make allunit` to verify that your build works properly. To test performance, build with `BUILD_BENCHMARK=1 BUILD_TPCH=1 make` and then run the standard benchmarks from the root directory with `./build/release/benchmark/benchmark_runner`. The details of the benchmarks are in our Benchmark Guide.
Please also refer to our Contribution Guide.