Spark Athena

AWS Athena data source for Apache Spark

This library provides support for reading an Amazon Athena table with Apache Spark via the Athena JDBC Driver.

I developed this library for the following reason:

Apache Spark's built-in JDBC data source uses PreparedStatement when reading data. However, the Athena JDBC Driver provided by AWS implements only the Statement part of the JDBC Driver spec, not PreparedStatement, so Apache Spark cannot read Athena data through JDBC out of the box.

I therefore adapted the JDBC data source implementation code in spark-sql, changing it to call Statement on the Athena JDBC Driver so that Apache Spark can read Athena data.
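The gist of that change can be illustrated with a plain-Scala sketch. The helper name and signature below are hypothetical, not the library's actual code: instead of preparing a parameterized query and binding values, the complete SQL text is assembled up front and handed to a Statement.

```scala
// Illustrative sketch only: Spark's JDBC source normally prepares a
// parameterized query and binds values via PreparedStatement. Because the
// Athena driver supports only Statement, the full SQL string must be built
// before execution, e.g. by a helper like this (hypothetical name):
def buildSelect(table: String, columns: Seq[String], filters: Seq[String]): String = {
  val cols  = if (columns.isEmpty) "*" else columns.mkString(", ")
  val where = if (filters.isEmpty) "" else filters.mkString(" WHERE ", " AND ", "")
  s"SELECT $cols FROM $table$where"
}

// The resulting string would then go to java.sql.Statement#executeQuery
// rather than Connection#prepareStatement.
```

For example, `buildSelect("users", Seq("id", "name"), Seq("age > 21"))` yields `SELECT id, name FROM users WHERE age > 21`.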


DataFrame Usage

You can register an Athena table and run SQL queries against it, or query it with the Apache Spark SQL DSL.

import io.github.tmheo.spark.athena._

// Read with a SQL query from the current region, using the default S3 staging directory.
val users = spark.read.athena("(select * from users)")

// Read a table from the current region with an explicit S3 staging directory.
val users2 = spark.read.athena("users", "s3://staging_dir")

// Read a table from another region with an explicit S3 staging directory.
val users3 = spark.read.athena("users", "us-east-1", "s3://staging_dir")
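The three overloads above can be thought of as filling in a small set of options. The helper below is a hypothetical illustration of that mapping, using the option names documented under Configuration; it is not the library's internal code.

```scala
// Hypothetical illustration: map the athena(...) overload arguments onto
// the option names from the Configuration section. Omitted values fall back
// to the library's defaults (current region, default staging directory).
def athenaOptions(dbtable: String,
                  region: Option[String] = None,
                  stagingDir: Option[String] = None): Map[String, String] = {
  val base       = Map("dbtable" -> dbtable)
  val withRegion = region.fold(base)(r => base + ("region" -> r))
  stagingDir.fold(withRegion)(s => withRegion + ("s3_staging_dir" -> s))
}
```

For example, `athenaOptions("users", Some("us-east-1"), Some("s3://staging_dir"))` corresponds to the three-argument read shown above.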

Configuration

Option          Description
dbtable         Athena table name or SQL query.
region          AWS region. The default is the current region.
s3_staging_dir  The Amazon S3 location to which your query output is written. The default is s3://aws-athena-query-results-${accountNumber}-${region}/
user            AWS access key ID. If user and password are not specified, the library falls back to InstanceProfileCredentialsProvider.
password        AWS secret access key. If user and password are not specified, the library falls back to InstanceProfileCredentialsProvider.
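The default s3_staging_dir is derived from the account number and region by direct substitution; a minimal sketch of that pattern (the function name is hypothetical, only the pattern comes from the table above):

```scala
// Sketch of the documented default:
//   s3://aws-athena-query-results-${accountNumber}-${region}/
// The function name is hypothetical; only the URL pattern is from the docs.
def defaultStagingDir(accountNumber: String, region: String): String =
  s"s3://aws-athena-query-results-$accountNumber-$region/"
```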