Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Superset | 52,260 | 12 hours ago | 3 | April 29, 2022 | 1,337 | apache-2.0 | TypeScript | Apache Superset is a Data Visualization and Data Exploration Platform
Tidb | 34,144 | 11 hours ago | 1,289 | April 07, 2022 | 4,015 | apache-2.0 | Go | TiDB is an open-source, cloud-native, distributed, MySQL-compatible database for elastic scale and real-time analytics. Try AI-powered Chat2Query free at: https://tidbcloud.com/free-trial
Metabase | 32,600 | 10 hours ago | 1 | June 08, 2022 | 3,032 | other | Clojure | The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:
Dbeaver | 32,251 | 11 hours ago | | | 1,744 | apache-2.0 | Java | Free universal database tool and SQL client
Cockroach | 27,223 | 10 hours ago | 248 | August 06, 2021 | 6,642 | other | Go | CockroachDB - the open source, cloud-native distributed SQL database.
Sqlmap | 27,189 | 17 hours ago | 1 | February 27, 2018 | 60 | other | Python | Automatic SQL injection and database takeover tool
Directus | 21,752 | 11 hours ago | 55 | September 22, 2022 | 228 | other | TypeScript | The Modern Data Stack 🐰: Directus is an instant REST+GraphQL API and intuitive no-code data collaboration app for any SQL database.
Tdengine | 21,402 | 10 hours ago | 12 | April 14, 2022 | 1,022 | agpl-3.0 | C | TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, Industrial IoT and DevOps.
Surrealdb | 20,635 | 10 hours ago | 35 | December 14, 2021 | 220 | other | Rust | A scalable, distributed, collaborative, document-graph database, for the realtime web
Postgrest | 20,612 | 20 hours ago | 37 | July 12, 2022 | 209 | mit | Haskell | REST API for any Postgres database
`datafreeze` creates static extracts of SQL databases for use in interactive web applications. SQL databases are a great way to manage relational data, but exposing them on the web to drive data apps can be cumbersome. Often, the capabilities of a proper database are not actually required; a few static JSON files and a bit of JavaScript can have the same effect. Still, exporting JSON by hand (or with a custom script) can become a messy process.

With `datafreeze`, exports are scripted in a Makefile-like description, making them simple to repeat and replicate.
The easiest way to install `datafreeze` is to retrieve it from the Python Package Index using `pip`:

```bash
pip install datafreeze
```
Calling `datafreeze` is simple: the application is invoked with a freeze file as its argument:

```bash
datafreeze Freezefile.yaml
```

Freeze files can be written either in JSON or in YAML. The database URI indicated in the Freezefile can also be overridden via the command line:

```bash
datafreeze --db sqlite:///foo.db Freezefile.yaml
```
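For one-off exports, the same freeze machinery can also be driven from Python instead of a Freezefile. The snippet below is a minimal sketch, assuming the `freeze` helper that was split out of `dataset` into this package is importable as `datafreeze.freeze` and that query results come from the companion `dataset` library; check the package for the exact signature.

```python
import dataset                 # companion library for lightweight SQL access
from datafreeze import freeze  # assumed import path for the freeze helper

# Connect to the same database a Freezefile would point at.
db = dataset.connect("sqlite:///foo.db")

# Run a query and dump the result set as a single JSON file.
result = db.query("SELECT id, title, date FROM events")
freeze(result, format="json", filename="index.json", prefix="my_project/dumps")
```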
A freeze file is composed of a set of scripted queries and specifications on how their output is to be handled. An example could look like this:
```yaml
common:
  database: "postgresql://user:password@localhost/operational_database"
  prefix: my_project/dumps/
  format: json

exports:
  - query: "SELECT id, title, date FROM events"
    filename: "index.json"

  - query: "SELECT id, title, date, country FROM events"
    filename: "countries/{{country}}.csv"
    format: csv

  - query: "SELECT * FROM events"
    filename: "events/{{id}}.json"
    mode: item

  - query: "SELECT * FROM events"
    filename: "all.json"
    format: tabson
```
An identical JSON configuration can be found in this repository.
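Because JSON is a subset of YAML, the two formats are interchangeable here. A quick way to convince yourself, and to sanity-check a Freezefile before running it, is to round-trip it through PyYAML; this is a small sketch that assumes PyYAML is installed and the example above is saved as `Freezefile.yaml`:

```python
import json

import yaml  # PyYAML; JSON Freezefiles parse with the same loader

with open("Freezefile.yaml") as fh:
    config = yaml.safe_load(fh)

# The same structure serialized as JSON is an equally valid Freezefile.
print(json.dumps(config, indent=2))

# List what would be exported, without touching the database.
for export in config.get("exports", []):
    print(export["query"], "->", export.get("filename"))
```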
The freeze file has two main sections, `common` and `exports`. Both accept many of the same arguments, with `exports` specifying a list of exports while `common` defines some shared properties, such as the database connection string.
The following options are recognized:
- `database` is a database URI, including the database type, username and password, hostname and database name. Valid database types include `sqlite`, `mysql` and `postgresql` (requires psycopg2).
- `prefix` specifies a common root directory for all extracted files.
- `format` identifies the format to be generated; `csv`, `json` and `tabson` are supported. `tabson` is a condensed JSON representation in which rows are not represented by objects but by lists of values.
- `query` needs to be a valid SQL statement. All selected fields will become keys or columns in the output, so it may make sense to define proper aliases if any overlap is to be expected.
- `mode` specifies whether the query output is to be combined into a single file (`list`) or whether a file should be generated for each result row (`item`); see the sketch after this list.
- `filename` is the output file name, appended to `prefix`. All occurrences of `{{field}}` are expanded to a field's value to allow the generation of file names, e.g. by primary key. In `list` mode, templating can be used to group records into several buckets, e.g. by country or category.
- `wrap` can be used to specify whether the output should be wrapped in a `results` hash in JSON output. This defaults to `true` for `list`-mode output and `false` for `item`-mode output.
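To make `mode`, `filename` and `prefix` concrete, here is a rough sketch of what an `item`-mode export with `{{field}}` templating amounts to. It illustrates the behaviour described above rather than the package's actual implementation, and the rows are invented sample data.

```python
import json
import os

prefix = "my_project/dumps/"
filename_template = "events/{{id}}.json"  # as in the Freezefile example

# Invented sample rows standing in for a query result.
rows = [
    {"id": 1, "title": "Launch", "date": "2012-05-01"},
    {"id": 2, "title": "Update", "date": "2012-06-12"},
]

# item mode: one file per result row, with {{field}} expanded from that row.
for row in rows:
    path = filename_template
    for key, value in row.items():
        path = path.replace("{{%s}}" % key, str(value))
    path = os.path.join(prefix, path)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as fh:
        # wrap defaults to false in item mode, so each row is written bare.
        json.dump(row, fh)
```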
`dataset` is written and maintained by Friedrich Lindenberg, Gregor Aisch and Stefan Wehrmeyer. We're standing on the shoulders of giants.