Project Name | Description | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---|---
Data Science Ipython Notebooks | Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines. | 25,025 | | | | 22 days ago | | | 33 | other | Python
Dev Setup | macOS development environment setup: Easy-to-understand instructions with automated setup scripts for developer tools like Vim, Sublime Text, Bash, iTerm, Python data analysis, Spark, Hadoop MapReduce, AWS, Heroku, JavaScript web development, Android development, common data stores, and dev-based OS X defaults. | 5,802 | | | | 9 months ago | | | 34 | other | Python
Seldon Server | Machine Learning Platform and Recommendation Engine built on Kubernetes | 1,420 | | | | 3 years ago | 44 | June 28, 2017 | 26 | apache-2.0 | Java
Aws Serverless Java Container | A Java wrapper to run Spring, Jersey, Spark, and other apps inside AWS Lambda. | 1,350 | | 89 | 15 | a day ago | 30 | June 10, 2022 | 47 | apache-2.0 | Java
Aws Glue Samples | AWS Glue code samples | 1,282 | | | | 4 days ago | | | 36 | mit-0 | Python
Devops Python Tools | 80+ DevOps & Data CLI Tools - AWS, GCP, GCF Python Cloud Functions, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr etc. | 651 | | | | 18 days ago | | | 30 | mit | Python
Flintrock | A command-line tool for launching Apache Spark clusters. | 604 | | 4 | | 9 months ago | 13 | June 13, 2021 | 36 | other | Python
Aws Glue Libs | AWS Glue Libraries are additions and enhancements to Spark for ETL operations. | 514 | | | | 5 months ago | | | 80 | other | Python
Spark Redshift | Redshift data source for Apache Spark | 514 | | 4 | 1 | 4 years ago | 10 | November 01, 2016 | 134 | apache-2.0 | Scala
Agile_data_code_2 | Code for Agile Data Science 2.0, O'Reilly 2017, Second Edition | 435 | | | | 2 months ago | | | 7 | mit | Jupyter Notebook
variant-spark is a scalable toolkit for genome-wide association studies, optimized for GWAS-like datasets.
Machine learning methods, and in particular random forests (RFs), are a promising alternative to standard single-SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures to rank SNPs according to their predictive power. Although a number of random forest implementations already exist, some of them parallel or distributed (such as Random Jungle, ranger or SparkML), most are not optimized for GWAS datasets, which usually come with thousands of samples and millions of variables.
variant-spark currently provides the basic functionality of building a random forest model and estimating variable importance with the mean decrease Gini method, and can operate on VCF and CSV files. Future extensions will include support for other importance measures, variable selection methods and data formats.
variant-spark utilizes a novel approach of building the random forest from data in a transposed representation, which allows it to efficiently deal with even extremely wide GWAS datasets. Moreover, since VCF, the most common format for genomic variant calls, already stores data in this transposed representation, variant-spark can work directly with VCF data, without the costly pre-processing required by other tools.
variant-spark is built on top of Apache Spark, a modern distributed framework for big data processing, which gives variant-spark the ability to scale horizontally on both bespoke clusters and public clouds.
The potential users include:
Please feel free to add issues and/or upvote issues you care about. Also join the Gitter chat. We have also started ReadTheDocs, and there is always this repo's issues page for adding requests. Thanks for your support.
To learn more watch this video from YOW! Brisbane 2017.
variant-spark requires Java JDK 1.8+ and Maven 3+.
To build the binaries, use:
mvn clean install
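During iterative development it can be handy to skip the test phase; the flag below is a standard Maven option rather than anything variant-spark specific:

```bash
# Standard Maven flag (not project-specific): build and install without running the test suite.
mvn clean install -DskipTests
```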
For Python, variant-spark requires Python 3.6+ with pip.
The other packages required for development are listed in `dev/dev-requirements.txt` and can be installed with:
pip install -r dev/dev-requirements.txt
or with:
./dev/py-setup.sh
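If you want to keep these packages isolated from your system Python, a minimal sketch using a standard virtual environment (the `.venv` directory name is just an illustration) is:

```bash
# Create and activate an isolated Python environment, then install the development requirements.
python3 -m venv .venv
source .venv/bin/activate
pip install -r dev/dev-requirements.txt
```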
The complete build, including all checks, can be run with:
./dev/build.sh
variant-spark requires an existing Spark 3.1+ installation (either a local one or a cluster one).
To run variant-spark use:
./variant-spark [(--spark|--local) <spark-options>* --] [<command>] <command-options>*
To obtain the list of available commands, use:
./variant-spark -h
To obtain help for a specific command (for example `importance`), use:
./variant-spark importance -h
You can use the `--spark` marker before the command to pass `spark-submit` options to variant-spark. The list of Spark options needs to be terminated with `--`, e.g.:
./variant-spark --spark --master yarn-client --num-executors 32 -- importance ....
Please note that `--spark` needs to be the first argument of `variant-spark`.
You can also run variant-spark in `--local` mode. In this mode variant-spark ignores any Hadoop or Spark configuration files and runs in local mode for both Hadoop and Spark; in particular, all file paths are interpreted as local file system paths. Any parameters passed after `--local` and before `--` are ignored. For example:
./variant-spark --local -- importance -if data/chr22_1000.vcf -ff data/chr22-labels.csv -fc 22_16051249 -v -rn 500 -rbs 20 -ro
Note: the difference between running in `--local` mode and in `--spark` mode with a `local` master is that in the latter case Spark uses the Hadoop filesystem configuration, and the input files need to be copied to that filesystem (e.g. HDFS). The output will also be written to the location determined by the Hadoop filesystem settings; in particular, paths without a scheme, e.g. 'output.csv', are resolved against the Hadoop default filesystem (usually HDFS).
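As an illustration of that difference, here is a hypothetical sketch of running with `--spark` and a local master when HDFS is the default filesystem (the `local[4]` master and the `vs-input` HDFS directory are assumptions; the `importance` options are copied from the example above):

```bash
# Copy the sample inputs to HDFS first, since paths are resolved against the Hadoop default filesystem.
hdfs dfs -mkdir -p vs-input
hdfs dfs -put data/chr22_1000.vcf data/chr22-labels.csv vs-input/
# Run the importance command against the HDFS copies.
./variant-spark --spark --master local[4] -- importance -if vs-input/chr22_1000.vcf -ff vs-input/chr22-labels.csv -fc 22_16051249 -v -rn 500 -rbs 20 -ro
```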
To change this behavior you can set the default filesystem on the command line using the `spark.hadoop.fs.default.name` option. For example, to use the local filesystem as the default, use:
./variant-spark --spark ... --conf "spark.hadoop.fs.default.name=file:///" ... -- importance ... -of output.csv
You can also use a full URI with a scheme to address any filesystem for both input and output files, e.g.:
./variant-spark --spark ... --conf "spark.hadoop.fs.default.name=file:///" ... -- importance -if hdfs:///user/data/input.csv ... -of output.csv
There are multiple ways to run the variant-spark examples.
variant-spark comes with a few example scripts in the `examples` directory that demonstrate how to run its commands on sample data.
There are a few small data sets in the `data` directory suitable for running on a single machine. For example:
./examples/local_run-importance-ch22.sh
runs the variable importance command on a small sample of the chromosome 22 VCF file (from the 1000 Genomes Project).
The full-size examples require a cluster environment (the scripts are configured to work with Spark on YARN).
The data required for the examples can be obtained from: https://bitbucket.csiro.au/projects/PBDAV/repos/variant-spark-data
This repository uses the Git Large File Storage (LFS) extension, which needs to be installed first (see: https://git-lfs.github.com/).
Clone the `variant-spark-data` repository and then, to install the test data into your Hadoop filesystem, use:
./install-data
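Putting these steps together, a minimal sketch might look like the following (the clone URL placeholder and the local directory name are illustrative; use the repository address above):

```bash
# Install Git LFS support once per machine, then clone the data repository and install the sample data.
git lfs install
git clone <variant-spark-data-repo-url> variant-spark-data
cd variant-spark-data
./install-data
```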
By default the sample data will be installed into the `variant-spark-data/input` subdirectory of your HDFS home directory.
You can choose a different location by setting the `VS_DATA_DIR` environment variable.
After the test data has been successfully copied to HDFS you can run the example scripts, e.g.:
./examples/yarn_run-importance-ch22.sh
Note: if you installed the data to a non-default location, `VS_DATA_DIR` needs to be set accordingly when running the examples.
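For example, a minimal sketch using a custom data location (the directory name below is purely illustrative) could be:

```bash
# Install the sample data to a custom HDFS location and run the YARN example against it
# (run install-data from the variant-spark-data repository and the example script from this repository).
export VS_DATA_DIR=variant-spark-data/custom-input
./install-data
./examples/yarn_run-importance-ch22.sh
```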
VariantSpark can easily be used in AWS and Azure. For more examples and information, check the cloud folder. For a quick start, see the pointers below.
VariantSpark is now available on the AWS Marketplace. Please read the Guidelines for specification and step-by-step instructions.
VariantSpark can be easily deployed in Azure Databricks through the button below. Please read the VariantSpark Azure manual for specification and step-by-step instructions.
JsonRfAnalyser is a Python program that looks into the JSON RandomForest model and lists the variables in each tree and branch. Please read its README for the complete list of functionalities.
rfview.html is a web page (run locally on your machine) where you can upload the JSON model produced by VariantSpark to visualise the trees in the model. You can choose which tree to visualise, and the node colour and node labels can be set to different parameters such as the number of samples in the node or the node impurity. It uses vis.js for tree visualisation.