|Project Name|Stars|Downloads|Repos Using This|Packages Using This|Most Recent Commit|Total Releases|Latest Release|Open Issues|License|Language|Description|
|---|---|---|---|---|---|---|---|---|---|---|---|
|Recommenders|15,799||2||19 hours ago|11|April 01, 2022|165|mit|Python|Best Practices on Recommendation Systems|
|Awesome Kubernetes|13,893||||21 days ago|||9|other|Shell|A curated list for awesome kubernetes sources :ship::tada:|
|Computervision Recipes|8,950||||4 months ago|||65|mit|Jupyter Notebook|Best Practices, code samples, and documentation for Computer Vision.|
|Metaflow|6,693||||19 hours ago|57|September 17, 2022|270|apache-2.0|Python|:rocket: Build and manage real-life data science projects with ease!|
|Synapseml|4,295||1||a day ago|5|January 12, 2022|305|mit|Scala|Simple and Distributed Machine Learning|
|Machinelearningnotebooks|3,705||||a day ago|||371|mit|Jupyter Notebook|Python notebooks with ML and deep learning examples with Azure Machine Learning Python SDK \| Microsoft|
|Forecasting|2,540||||a month ago||||mit|Python|Time Series Forecasting Best Practices & Examples|
|Spark|1,878||16||4 months ago|20|June 01, 2022|181|mit|C#|.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.|
|Feathr|1,788||||a day ago|||147|apache-2.0|Scala|Feathr – A scalable, unified data and AI engineering platform for enterprise|
|Seldon Server|1,420||||3 years ago|44|June 28, 2017|26|apache-2.0|Java|Machine Learning Platform and Recommendation Engine built on Kubernetes|
This toolbox aims to provide low-level and high-level building blocks for Machine Learning / AI researchers and practitioners. It helps to simplify and streamline work on deep learning models for healthcare and life sciences by providing tested components (data loaders, pre-processing), deep learning models, and cloud integration tools.
This repository consists of two Python packages, as well as project-specific codebases:

- For the full toolbox (this will also install `hi-ml-azure`), install via pip:

  ```shell
  pip install hi-ml
  ```

- For just the AzureML helper functions:

  ```shell
  pip install hi-ml-azure
  ```
For the histopathology workflows, please follow the instructions here.
If you would like to contribute to the code, please check the developer guide.
The detailed package documentation, with examples and API reference, is on readthedocs.
Use case: you have a Python script that does something, for example training a model or pre-processing some data. The `hi-ml-azure` package can help you run it easily on Azure Machine Learning (AML) services.
Here is an example script that reads images from a folder, resizes them, and saves them to an output folder:
```python
from pathlib import Path

if __name__ == '__main__':
    input_folder = Path("/tmp/my_dataset")
    output_folder = Path("/tmp/my_output")
    for file in input_folder.glob("*.jpg"):
        # read_image / write_image stand in for your own image I/O helpers
        contents = read_image(file)
        resized = contents.resize(0.5)
        write_image(resized, output_folder / file.name)
```
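`read_image` and `write_image` above are placeholders for your own helpers. A self-contained variant of the same loop, using byte-copy stubs so it runs without any imaging library (the stub helpers and temporary folders are for illustration only):

```python
from pathlib import Path
import tempfile

def read_image(path: Path) -> bytes:
    # Stand-in for a real image reader
    return path.read_bytes()

def write_image(contents: bytes, path: Path) -> None:
    # Stand-in for a real image writer
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(contents)

input_folder = Path(tempfile.mkdtemp())
output_folder = Path(tempfile.mkdtemp())
(input_folder / "a.jpg").write_bytes(b"fake-jpeg")

for file in input_folder.glob("*.jpg"):
    contents = read_image(file)
    # A real implementation would resize here before writing
    write_image(contents, output_folder / file.name)
```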
Doing that at scale can take a long time. We'd like to run that script in AzureML, consume the data from a folder in blob storage, and write the results back to blob storage.
With the `hi-ml-azure` package, you can turn that script into one that runs in the cloud by adding one function call:
```python
from pathlib import Path

from health_azure import submit_to_azure_if_needed

if __name__ == '__main__':
    current_file = Path(__file__)
    run_info = submit_to_azure_if_needed(compute_cluster_name="preprocess-ds12",
                                         input_datasets=["images123"],
                                         # Omit this line if you don't create an output dataset (for example, in
                                         # model training scripts)
                                         output_datasets=["images123_resized"],
                                         default_datastore="my_datastore")
    # When running in AzureML, run_info.input_datasets and run_info.output_datasets will be populated,
    # and point to the data coming from blob storage. For runs outside AML, the paths will be None.
    # Replace the None with a meaningful path, so that we can still run the script easily outside AML.
    input_dataset = run_info.input_datasets[0] or Path("/tmp/my_dataset")
    output_dataset = run_info.output_datasets[0] or Path("/tmp/my_output")
    files_processed = []
    for file in input_dataset.glob("*.jpg"):
        contents = read_image(file)
        resized = contents.resize(0.5)
        write_image(resized, output_dataset / file.name)
        files_processed.append(file.name)
    # Any other files that you would not consider an "output dataset", like metrics, etc, should be written to
    # a folder "./outputs". Any files written into that folder will later be visible in the AzureML UI.
    # run_info.output_folder already points to the correct folder.
    stats_file = run_info.output_folder / "processed_files.txt"
    stats_file.write_text("\n".join(files_processed))
```
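Note the fallback pattern used for the dataset paths: outside AzureML the dataset entries are `None`, so `or` substitutes a local path. A minimal sketch of that pattern (`resolve_dataset` is a hypothetical helper, not part of `health_azure`):

```python
from pathlib import Path
from typing import Optional

def resolve_dataset(mounted: Optional[Path], local_fallback: str) -> Path:
    # Inside AzureML, `mounted` points at the blob-storage mount; outside AML it is
    # None, so we fall back to a local folder and the script still runs unchanged.
    return mounted or Path(local_fallback)

# Outside AML: no mounted path, use the local folder
assert resolve_dataset(None, "/tmp/my_dataset") == Path("/tmp/my_dataset")
# Inside AML: the mounted blob-storage path wins
assert resolve_dataset(Path("/mnt/images123"), "/tmp/my_dataset") == Path("/mnt/images123")
```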
Once these changes are in place, you can submit the script to AzureML by supplying the additional `--azureml` flag on the command line, like `python myscript.py --azureml`.
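The effect of the flag can be sketched as a simple command-line check (a hypothetical illustration of the idea, not the library's actual implementation):

```python
import sys

def should_submit_to_azureml(argv: list) -> bool:
    # Submission is triggered only when the --azureml flag is present;
    # without it, the script keeps running locally.
    return "--azureml" in argv

assert should_submit_to_azureml(["myscript.py", "--azureml"])
assert not should_submit_to_azureml(["myscript.py"])
```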
For details, please refer to the onboarding page.
For more examples, please see examples.md.
If you've found a bug in the code, please check the issues page. If no existing issue matches, please open a new one; be sure to include a clear description of the problem and the steps to reproduce it.
We welcome all contributions that help us achieve our aim of speeding up ML/AI research in health and life sciences. For examples of contributions and details of the process, please check the detailed page about contributions.
You are responsible for the performance, the necessary testing, and if needed any regulatory clearance for any of the models produced by this toolbox.
If you have any feature requests, or find issues in the code, please create an issue on GitHub.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.