| Project Name | Description | Stars | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|
| Ethereum Etl | Python scripts for ETL (extract, transform and load) jobs for Ethereum blocks, transactions, ERC20 / ERC721 tokens, transfers, receipts, logs, contracts, and internal transactions. Data is available in Google BigQuery: https://goo.gl/oY5BCQ | 2,456 | | 4 days ago | 37 | May 24, 2022 | 125 | mit | Python |
| Dynamodb Onetable | DynamoDB access and management for one-table designs with NodeJS | 538 | 2 | 5 days ago | 92 | September 09, 2022 | 13 | mit | JavaScript |
| Parquet4s | Read and write Parquet in Scala. Use Scala classes as schema. No need to start a cluster. | 236 | 2 | 6 days ago | 47 | December 15, 2022 | 12 | mit | Scala |
| Evb Cli | Pattern generator and debugging tool for Amazon EventBridge | 200 | | 3 months ago | 57 | June 30, 2022 | 15 | | JavaScript |
| Aws Config Resource Schema | AWS Config resource schema files define the properties and types of AWS Config resource configuration items (CIs). Resource CI schemas are used by developers when performing advanced resource queries and when processing CI data. | 162 | | 12 days ago | | | 16 | apache-2.0 | |
| Eventbridge Atlas | Open-source tool to document, discover, and share your Amazon EventBridge schemas. | 135 | | 6 months ago | | | 4 | mit | JavaScript |
| Botoform | Architect infrastructure on AWS using YAML | 100 | | 3 years ago | | | | | Python |
| Dynamodump | Node CLI for backing up and restoring schema+data from DynamoDB tables | 100 | | a year ago | 12 | February 26, 2022 | 4 | mit | JavaScript |
| Flutter Aws Appsync Sample | Create custom bridge for AWS AppSync in Flutter | 94 | | 3 years ago | | | 2 | | Swift |
| Graphql Recipes | A list of GraphQL recipes that, when used with the Amplify CLI, will deploy an entire AWS AppSync GraphQL backend. | 83 | | 3 years ago | | | 1 | | |
This is a Kafka consumer that reads mock clickstream data for an imaginary e-commerce site from an Apache Kafka topic. It uses a Schema Registry and reads Avro-encoded events, supporting both the AWS Glue Schema Registry and a 3rd party Schema Registry.
For the AWS Glue Schema Registry, the consumer accepts parameters to use a specific registry, a pre-created schema name, a schema description, and a secondary deserializer. If those parameters are not specified but the AWS Glue Schema Registry is selected, the default schema registry is used. Some of these parameters need to be specified only when others are not. The AWS Glue Schema Registry provides open-source serde libraries for serialization and deserialization, which by default use the AWS default credentials chain for credentials and the region to construct an endpoint.

One important thing to call out here is the use of the secondary deserializer with the AWSKafkaAvroDeserializer. The secondary deserializer allows the KafkaAvroDeserializer that integrates with the AWS Glue Schema Registry to fall back to a specified deserializer that points to a 3rd party Schema Registry. This enables the AWSKafkaAvroDeserializer to deserialize records that were not serialized using the AWS Glue Schema Registry, which is primarily useful when migrating from a 3rd party Schema Registry to the AWS Glue Schema Registry. With the secondary deserializer specified, the consumer can seamlessly deserialize records using both a 3rd party Schema Registry and the AWS Glue Schema Registry. To use it, the properties specific to the 3rd party deserializer need to be specified as well. For further information, see the AWS Glue Schema Registry documentation.
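As a minimal sketch (not this project's exact wiring), the relevant consumer properties might look like the following, assuming the AWS Glue Schema Registry serde library's AWSKafkaAvroDeserializer and AWSSchemaRegistryConstants, with Confluent's KafkaAvroDeserializer as the 3rd party fallback; the bootstrap servers, registry name, schema name, and URLs are placeholders:

```java
import java.util.Properties;

import com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer;
import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GlueConsumerProperties {
    static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "b-1.msksource.example:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "clickstream-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Deserialize values through the AWS Glue Schema Registry.
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, AWSKafkaAvroDeserializer.class.getName());
        props.put(AWSSchemaRegistryConstants.AWS_REGION, "us-east-1");

        // Optional: a specific registry and pre-created schema; if omitted,
        // the default schema registry is used.
        props.put(AWSSchemaRegistryConstants.REGISTRY_NAME, "clickstream-registry"); // hypothetical
        props.put(AWSSchemaRegistryConstants.SCHEMA_NAME, "ExampleTopic");           // hypothetical

        // Secondary deserializer: fall back to a 3rd party deserializer for
        // records that were not serialized with the AWS Glue Schema Registry.
        props.put(AWSSchemaRegistryConstants.SECONDARY_DESERIALIZER,
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // Properties required by the secondary deserializer must also be present.
        props.put("schema.registry.url", "http://schema-registry.example:8081"); // placeholder
        return props;
    }
}
```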
For the 3rd party Schema Registry, the location of the Schema Registry needs to be specified in a consumer.properties file.
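For example, with a Confluent-compatible registry, the consumer.properties entry might be just the registry URL (the exact property name depends on the 3rd party deserializer in use; the URL below is a placeholder):

```properties
schema.registry.url=http://schema-registry.example:8081
```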
The consumer supports running multiple consumer instances in parallel. The number of consumers to run can be specified; each runs in a separate thread, and all of them join the same consumer group.
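As an illustrative sketch of that pattern using plain Apache Kafka clients (this project's actual entry point is its RunConsumer class): each thread gets its own KafkaConsumer, and because every instance shares the same group.id, the topic's partitions are balanced across the threads.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiThreadedConsumer {

    static Runnable consumerTask(Properties props, String topic) {
        return () -> {
            // Each thread gets its own KafkaConsumer; KafkaConsumer is not thread-safe.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList(topic));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
                    }
                }
            }
        };
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Shared group.id => Kafka splits the topic's partitions across the threads.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "clickstream-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        int numThreads = 3; // analogous to the -nt parameter
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        for (int i = 0; i < numThreads; i++) {
            pool.submit(consumerTask(props, "ExampleTopic"));
        }
        pool.shutdown();
    }
}
```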
This consumer works with the constructs in MirrorMaker v2 (MM2). MM2, which ships as part of Apache Kafka in version 2.4.0 and above, detects and replicates topics, topic partitions, topic configurations, and topic ACLs that match a regex topic pattern to the destination cluster. Furthermore, it checks for new topics that match the topic pattern, and for changes to configurations and ACLs, at regular configurable intervals. The topic pattern can also be changed dynamically by changing the configuration of the MirrorSourceConnector. MM2 can therefore be used to migrate topics and topic data to the destination cluster and keep them in sync.
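For orientation, a minimal mm2.properties sketch for connect-mirror-maker.sh along those lines; the cluster aliases and bootstrap servers are placeholders:

```properties
# Cluster aliases and connection info (placeholders)
clusters = msksource, mskdest
msksource.bootstrap.servers = source-broker-1:9092
mskdest.bootstrap.servers = dest-broker-1:9092

# Replicate every topic matching the regex from source to destination
msksource->mskdest.enabled = true
msksource->mskdest.topics = .*

# Periodically re-check for new matching topics and config/ACL changes
refresh.topics.interval.seconds = 60
sync.topic.configs.enabled = true
sync.topic.acls.enabled = true
```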
When messages in topics are replicated between clusters, the offsets in topic partitions can differ, either because of producer retries or, more likely, because the retention period in the source topic had elapsed and messages in the source topic had already been deleted when replication started. Even if the __consumer_offsets topic is replicated, consumers, on failover, might not find the right offsets at the destination.
MM2 provides a facility that keeps source and destination offsets in sync. The MM2 MirrorCheckpointConnector periodically emits checkpoints in the destination cluster containing the offsets for each consumer group in the source cluster. The connector periodically queries the source cluster for all committed offsets from all consumer groups, filters for the topics being replicated, and emits a message to a topic like <source-cluster-alias>.checkpoints.internal in the destination cluster. These offsets can then be queried and retrieved using the provided RemoteClusterUtils or MirrorClient classes.

This consumer accepts a parameter (-flo) that indicates the consumer has failed over. In that case, it uses the RemoteClusterUtils class to get the translated offsets at the destination and seeks to them. Since the MirrorCheckpointConnector emits checkpoints periodically, the consumer might have read offsets at the source that were not yet checkpointed when it failed over. To account for that and minimize duplicates, this consumer writes the last offset processed for each topic partition to a file when stopped. On failover, it reads the file, calculates the difference between the checkpointed offsets and the last offsets read at the source, skips the equivalent number of messages at the destination after translation, and resumes reading.
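A minimal sketch of that failover lookup, using the RemoteClusterUtils class from Kafka's connect-mirror-client; the source-cluster alias and bootstrap servers are placeholders, and the real consumer additionally applies the offset-difference correction described above:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class FailoverSeek {
    // Translate the source cluster's checkpointed offsets into destination
    // offsets and position the consumer there.
    static void seekToTranslatedOffsets(KafkaConsumer<?, ?> consumer, String groupId) throws Exception {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "dest-broker-1:9092"); // destination cluster (placeholder)

        // Reads the <source-cluster-alias>.checkpoints.internal topic on the destination.
        Map<TopicPartition, OffsetAndMetadata> translated =
                RemoteClusterUtils.translateOffsets(props, "msksource", groupId, Duration.ofSeconds(30));

        consumer.assign(translated.keySet());
        translated.forEach(consumer::seek); // seek each partition to its translated offset
    }
}
```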
In addition, if the consumer offsets are being synced in the background from the MM2 checkpoints to the __consumer_offsets topic at the destination, this consumer can also be used to simply start reading from the last committed offset at the destination.
This consumer supports TLS in-transit encryption, TLS mutual authentication and SASL/SCRAM authentication with Amazon MSK. See the relevant parameters to enable them below.
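For orientation, the underlying Kafka client settings for those modes look roughly like this in consumer.properties (paths, passwords, and usernames are placeholders):

```properties
# TLS in-transit encryption: broker authentication via the truststore
security.protocol = SSL
ssl.truststore.location = /tmp/kafka.client.truststore.jks

# TLS mutual authentication: additionally present a client certificate
ssl.keystore.location = /tmp/kafka.client.keystore.jks
ssl.keystore.password = <keystore-password>
ssl.key.password = <key-password>

# SASL/SCRAM authentication instead (uncomment and adjust):
#security.protocol = SASL_SSL
#sasl.mechanism = SCRAM-SHA-512
#sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \
#    username="nancy" password="<password>";
```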
This consumer can be used to read other types of events by modifying the RunConsumer class.
This consumer depends on another library to get secrets from AWS Secrets Manager for SASL/SCRAM authentication with Amazon MSK. That library needs to be installed before building the jar file for the consumer:
```
git clone https://github.com/aws-samples/sasl-scram-secrets-manager-client-for-msk.git
cd sasl-scram-secrets-manager-client-for-msk
mvn clean install -f pom.xml
```
Then clone and build this repository to produce the consumer jar:

```
git clone https://github.com/aws-samples/mirrormaker2-msk-migration.git
cd mirrormaker2-msk-migration
mvn clean install -f pom.xml
mvn clean package -f pom.xml
```
Example invocations:

```
# Mutual TLS authentication, reading from the source cluster
java -jar KafkaClickstreamConsumer-1.0-SNAPSHOT.jar -t ExampleTopic -pfp /tmp/kafka/consumer.properties -nt 3 -rf 10800 -mtls -src msksource

# Failover (-flo): translate checkpointed offsets and resume reading at the destination cluster
java -jar KafkaClickstreamConsumer-1.0-SNAPSHOT.jar -t ExampleTopic -pfp /tmp/kafka/consumer.properties -nt 3 -rf 10800 -mtls -flo -src msksource -dst mskdest

# SASL/SCRAM authentication with the user nancy
java -jar KafkaClickstreamConsumer-1.0-SNAPSHOT.jar -t ExampleTopic -pfp /tmp/kafka/consumer.properties -nt 3 -rf 10800 -sse -ssu nancy -src msksource

# IAM authentication
java -jar KafkaClickstreamConsumer-1.0-SNAPSHOT.jar -t ExampleTopic -pfp /tmp/kafka/consumer.properties -nt 3 -rf 10800 -iam
```