| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Neo4j Spark Connector | 293 | | | | a month ago | 14 | September 06, 2022 | 25 | apache-2.0 | Scala | Neo4j Connector for Apache Spark, which provides bi-directional read/write access to Neo4j from Spark, using the Spark DataSource APIs |
| Amundsenmetadatalibrary | 83 | | | | 2 years ago | | | | apache-2.0 | Python | Metadata service library for Amundsen |
| Spark Neo4j | 43 | | | | 4 years ago | | | 4 | apache-2.0 | Dockerfile | A single docker image that combines Neo4j Mazerunner and Apache Spark GraphX into a powerful all-in-one graph processing engine |
| Flink Neo4j | 21 | | | | 5 years ago | | | 3 | | Java | Read Cypher query results into Apache Flink and write datasets to Neo4j using Cypher batches. |
| Sdfeater | 7 | | | | 9 days ago | | | | mit | Java | Always hungry SDF chemical file format parser with many output formats |
| JumpMicro | | | | | | | | | | | Microservices www.jumpmicro.com |
| Spark On Neo4j | 4 | | | | 6 years ago | | | | apache-2.0 | Scala | Run Apache Spark Worker as part (in JVM process) of a Neo4j instance |
| Graffinity_server | 4 | | | | 3 years ago | | | | mit | Python | Graffinity's server, includes sample Neo4j databases |
| Spark_social_network_analysis | 3 | | | | 5 years ago | | | | | R | Example of social network analysis using Spark |
| Gather Sling | 2 | | | | 12 years ago | | | | | Objective-J | Apache Sling based web UI for GATHER. |
The Amundsen project moved to a monorepo. This repository will be kept up temporarily to allow users to transition gracefully, but new PRs won't be accepted.
Amundsen Metadata service serves a RESTful API and is responsible for providing and updating metadata, such as table and column descriptions and tags. The Metadata service can use Neo4j or Apache Atlas as a persistence layer.
For information about Amundsen and our other services, visit the main repository README.md. Please also see our instructions for a quick start setup of Amundsen with dummy data, and an overview of the architecture.
```shell
$ venv_path=[path_for_virtual_environment]
$ python3 -m venv $venv_path
$ source $venv_path/bin/activate
$ pip3 install amundsen-metadata
$ python3 metadata_service/metadata_wsgi.py

# In a different terminal, verify you get HTTP/1.0 200 OK
$ curl -v http://localhost:5002/healthcheck
```
```shell
$ git clone https://github.com/amundsen-io/amundsenmetadatalibrary.git
$ cd amundsenmetadatalibrary
$ python3 -m venv venv
$ source venv/bin/activate
$ pip3 install -r requirements.txt
$ python3 setup.py install
$ python3 metadata_service/metadata_wsgi.py

# In a different terminal, verify you get HTTP/1.0 200 OK
$ curl -v http://localhost:5002/healthcheck
```
```shell
$ docker pull amundsendev/amundsen-metadata:latest
$ docker run -p 5002:5002 amundsendev/amundsen-metadata

# Alternative, for a production environment with Gunicorn (see its homepage link below):
# $ docker run -p 5002:5002 amundsendev/amundsen-metadata gunicorn --bind 0.0.0.0:5002 metadata_service.metadata_wsgi

# In a different terminal, verify you get HTTP/1.0 200 OK
$ curl -v http://localhost:5002/healthcheck
```
By default, Flask ships with the Werkzeug web server, which is intended for development. For production environments, use a production-grade web server such as Gunicorn.
```shell
$ pip install gunicorn
$ gunicorn metadata_service.metadata_wsgi
```
See the Gunicorn documentation for the available configuration options.
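A minimal sketch of such a configuration file; the values below are illustrative, not recommendations:

```python
# gunicorn.conf.py -- illustrative settings only; see the Gunicorn docs for the full list.
bind = '0.0.0.0:5002'   # same port the Docker examples above expose
workers = 4             # tune to the number of available CPU cores
timeout = 30            # seconds before an unresponsive worker is restarted
accesslog = '-'         # write the access log to stdout
```

It can then be picked up with `gunicorn -c gunicorn.conf.py metadata_service.metadata_wsgi`.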
By default, the Metadata service uses LocalConfig, which looks for Neo4j running on localhost.
In order to use a different endpoint, you need to create a Config class suitable for your use case. Once the config class has been created, it can be referenced via an environment variable:

For example, to use a different config for production, you can inherit from the Config class, create a production config, and pass the production config class in via the environment variable. Say the class is named ProdConfig and lives in the metadata_service.config module; then you can set it as below:
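A minimal sketch of such a class follows. The base Config here is a stand-in, and the attribute names are assumptions chosen to illustrate the inheritance pattern; check them against the config module of your installed version:

```python
import os

class Config:
    """Stand-in for the base Config class shipped with the Metadata service."""
    LOG_LEVEL = 'INFO'
    PROXY_HOST = 'localhost'
    PROXY_PORT = 7687

class ProdConfig(Config):
    """Hypothetical production config pointing at a non-local Neo4j endpoint."""
    # Attribute names mirror the Atlas example below; verify them for Neo4j.
    PROXY_HOST = os.environ.get('PROXY_HOST', 'neo4j.prod.internal')
    PROXY_PORT = int(os.environ.get('PROXY_PORT', '7687'))
    LOG_LEVEL = 'WARNING'
```

The service is then pointed at this class through the config environment variable (for example, something like `METADATA_SVC_CONFIG_MODULE_CLASS=metadata_service.config.ProdConfig`; verify the exact variable name against your release).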
This way, the Metadata service will use the production config in the production environment. For more information on how the configuration is loaded and used, see the reference in the Flask documentation.
Amundsen Metadata service can use Apache Atlas as a backend. One of the benefits of using Apache Atlas instead of Neo4j is that Apache Atlas offers plugins for several services (e.g. Apache Hive, Apache Spark) that allow for push-based updates. It also allows setting policies on what metadata is accessible and editable by means of Apache Ranger.
If you would like to use Apache Atlas as a backend for the Metadata service, you will need to create a Config as mentioned above. Make sure to include the following:
```python
PROXY_CLIENT = PROXY_CLIENTS['ATLAS']  # or env PROXY_CLIENT='ATLAS'
PROXY_PORT = 21000                     # or env PROXY_PORT
PROXY_USER = 'atlasuser'               # or env CREDENTIALS_PROXY_USER
PROXY_PASSWORD = 'password'            # or env CREDENTIALS_PROXY_PASSWORD
```
To start the service with Atlas from Docker, make sure you have atlasserver configured in DNS (or docker-compose):
```shell
$ docker run -p 5002:5002 \
    --env PROXY_CLIENT=ATLAS \
    --env PROXY_PORT=21000 \
    --env PROXY_HOST=atlasserver \
    --env CREDENTIALS_PROXY_USER=atlasuser \
    --env CREDENTIALS_PROXY_PASSWORD=password \
    amundsen-metadata:latest
```
Support for Apache Atlas is a work in progress. For example, while Apache Atlas supports fine-grained access control, Amundsen does not support this yet.
We have Swagger documentation set up with OpenAPI 3.0.2, generated via Flasgger. When adding or updating an API, please make sure to update the documentation. To see the documentation, run the application locally and go to localhost:5002/apidocs/. Currently, the documentation only works with the local configuration.
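For illustration, this is the general Flasgger style of documenting an endpoint: the spec lives as YAML in the view function's docstring, below the `---` marker. The endpoint and response schema here are a hypothetical sketch, not part of the actual Amundsen API:

```python
from flask import Flask, jsonify
# from flasgger import Swagger   # Flasgger reads the YAML section of each docstring

app = Flask(__name__)
# Swagger(app)  # uncomment when flasgger is installed to serve /apidocs/

@app.route('/healthcheck')
def healthcheck():
    """Hypothetical endpoint; the YAML below is what Flasgger would render.
    ---
    responses:
      200:
        description: Service is healthy
    """
    return jsonify(status='ok')
```

Keeping the docstring YAML in sync with the handler's behavior is what "update the documentation" amounts to in practice.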
Please visit Code Structure to read how different modules are structured in Amundsen Metadata service.
Roundtrip tests are a new feature: by implementing the abstract_proxy_tests and some test setup endpoints in the base_proxy, you can validate your proxy code against the actual data store. These tests do not run by default, but can be run by passing the --roundtrip-[proxy] argument. Note this requires a fully configured backend to test against.
```shell
$ python -m pytest --roundtrip-neptune .
```