Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---
Dbeaver | 31,197 | | | | 9 hours ago | | | 1,751 | apache-2.0 | Java
Free universal database tool and SQL client | | | | | | | | | |
Redash | 22,844 | | | | a day ago | 2 | May 05, 2020 | 777 | bsd-2-clause | Python
Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data. | | | | | | | | | |
Aws Sdk Pandas | 3,371 | 34 | | | 9 hours ago | 125 | June 28, 2022 | 53 | apache-2.0 | Python
pandas on AWS - Easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL). | | | | | | | | | |
Node Orm2 | 3,069 | 700 | 98 | | 9 months ago | 100 | June 22, 2022 | 224 | mit | JavaScript
Object Relational Mapping | | | | | | | | | |
Sqlglot | 2,935 | 2 | | | 10 hours ago | 161 | July 06, 2022 | | mit | Python
Python SQL Parser and Transpiler | | | | | | | | | |
Fluentmigrator | 2,916 | 548 | 130 | | 10 days ago | 52 | January 14, 2022 | 221 | apache-2.0 | C#
Fluent migrations framework for .NET | | | | | | | | | |
Tbls | 2,228 | 4 | | | 16 hours ago | 31 | May 28, 2022 | 26 | mit | Go
tbls is a CI-friendly tool for documenting a database, written in Go. | | | | | | | | | |
Jailer | 1,579 | | | | 14 hours ago | 57 | July 04, 2022 | | apache-2.0 | Java
Database Subsetting and Relational Data Browsing Tool. | | | | | | | | | |
Db.py | 1,201 | 26 | 1 | | 3 years ago | 35 | March 31, 2017 | 32 | bsd-2-clause | Python
db.py is an easier way to interact with your databases | | | | | | | | | |
Lucid | 846 | 599 | 57 | | 4 months ago | 127 | June 23, 2022 | 21 | mit | TypeScript
AdonisJS SQL ORM. Supports PostgreSQL, MySQL, MSSQL, Redshift, SQLite and many more | | | | | | | | | |
AWS Data Wrangler is now AWS SDK for pandas (awswrangler). We're changing the name we use when we talk about the library, but everything else will stay the same. You'll still be able to install it with `pip install awswrangler`, and you won't need to change any of your code. As part of this change, we've moved the library from AWS Labs to the main AWS GitHub organisation, but thanks to GitHub's redirect feature you'll still be able to reach the project through its old URLs until you update your bookmarks. Our documentation has also moved to aws-sdk-pandas.readthedocs.io, but old bookmarks will redirect to the new site.
Pandas on AWS
Easy integration with Athena, Glue, Redshift, Timestream, OpenSearch, Neptune, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).
An AWS Professional Service open source initiative | [email protected]
Source | Installation Command
---|---
PyPi | `pip install awswrangler`
Conda | `conda install -c conda-forge awswrangler`
For platforms without PyArrow 3 support (e.g. EMR, Glue PySpark Job, MWAA):

```shell
pip install pyarrow==2 awswrangler
```
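Where that pin matters, a deployment script can check the installed PyArrow major version up front. This is only an illustrative sketch (the helper name is made up, not part of the library):

```python
# Illustrative sketch: detect the installed PyArrow major version so a
# deployment script can warn when a pyarrow==2 pin is needed.
from importlib import metadata
from typing import Optional

def pyarrow_major() -> Optional[int]:
    """Return PyArrow's installed major version, or None if it is absent."""
    try:
        version = metadata.version("pyarrow")
    except metadata.PackageNotFoundError:
        return None
    return int(version.split(".")[0])

major = pyarrow_major()
if major is not None and major > 2:
    print(f"pyarrow {major}.x found; this platform may require pyarrow==2")
```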
```python
import awswrangler as wr
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data in the data lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table",
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Getting a Redshift connection from the Glue Catalog and retrieving data from Redshift Spectrum
con = wr.redshift.connect("my-glue-connection")
df = wr.redshift.read_sql_query("SELECT * FROM external_schema.my_table", con=con)
con.close()

# Amazon Timestream write
df = pd.DataFrame({
    "time": [datetime.now(), datetime.now()],
    "my_dimension": ["foo", "boo"],
    "measure": [1.0, 1.1],
})
rejected_records = wr.timestream.write(
    df,
    database="sampleDB",
    table="sampleTable",
    time_col="time",
    measure_col="measure",
    dimensions_cols=["my_dimension"],
)

# Amazon Timestream query
wr.timestream.query("""
SELECT time, measure_value::double, my_dimension
FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3
""")
```
The best way to interact with our team is through GitHub: you can open an issue and choose one of our templates for bug reports, feature requests, and more. You may also find help on these community resources:
awswrangler
Please send a Pull Request with your resource reference and @githubhandle.
Examples of enabling internal logging:

```python
import logging

logging.basicConfig(level=logging.INFO, format="[%(name)s][%(funcName)s] %(message)s")
logging.getLogger("awswrangler").setLevel(logging.DEBUG)
logging.getLogger("botocore.credentials").setLevel(logging.CRITICAL)
```
Inside an AWS Lambda function (the Lambda runtime already configures a root log handler, so only the level needs to be set):

```python
import logging

logging.getLogger("awswrangler").setLevel(logging.DEBUG)
```
Knowing which companies are using this library is important to help prioritize the project internally. If you would like us to include your company's name and/or logo in the README file to indicate that your company is using the AWS SDK for pandas, please raise a "Support Us" issue. If you would like us to display your company's logo, please raise a linked pull request to provide an image file for the logo. Note that by raising a Support Us issue (and related pull request), you are granting AWS permission to use your company's name (and logo) for the limited purpose described here, and you are confirming that you have authority to grant such permission.