This project riffs off of the DynamoDB Golang samples and the Serverless Framework Go example to show how to build a simple API Gateway -> Lambda -> DynamoDB set of methods.
Note that, instead of using `create_table.go` to set up the initial table the way the AWS code example does, this project uses the resource-building mechanism that Serverless provides. Individual code is organized as follows:
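A rough sketch of the layout, inferred from the `make` output and the curl examples later in this README (the exact structure is an assumption, not a verbatim listing):

```
go-sls-crudl/
├── Makefile
├── Gopkg.toml / Gopkg.lock    # dep manifest and lock file
├── serverless.yml             # service, function, and DynamoDB table definitions
├── functions/
│   ├── get.go
│   ├── post.go
│   ├── put.go
│   ├── delete.go
│   └── list-by-year.go
├── data/                      # sample JSON payloads for the curl examples
│   ├── post1.json
│   └── put3.json
├── bin/                       # compiled binaries, created by make
└── vendor/                    # dependencies, created by dep ensure
```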
Note that given the recency of Golang support on both AWS Lambda and the Serverless Framework, combined with my own Go noob-ness, I'm not entirely certain this is the best layout but it was functional. My hope is that it helps spark a healthy debate over what a Go Serverless project should look like.
If you are a Serverless Framework rookie, follow the installation instructions here. If you are a grizzled vet, be sure that you have v1.26 or later, as that's the version that introduced Golang support. You'll also need to install Go and its dependency manager, `dep`.
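For reference, assuming you already have Node.js and Go on your machine (check the linked instructions for your platform), those installs typically boil down to:

```
$ npm install -g serverless                  # Serverless Framework (v1.26 or later)
$ go get -u github.com/golang/dep/cmd/dep    # dep, Go's dependency manager
```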
When both of those tasks are done, `cd` into your GOPATH (more than likely `~/go/src/`) and clone this project into that folder. Then `cd` into the resulting `go-sls-crudl` folder and compile the source with `make`:
$ make
dep ensure
env GOOS=linux go build -ldflags="-s -w" -o bin/get functions/get.go
env GOOS=linux go build -ldflags="-s -w" -o bin/post functions/post.go
env GOOS=linux go build -ldflags="-s -w" -o bin/delete functions/delete.go
env GOOS=linux go build -ldflags="-s -w" -o bin/put functions/put.go
env GOOS=linux go build -ldflags="-s -w" -o bin/list-by-year functions/list-by-year.go
What is this Makefile doing? First, it runs the `dep ensure` command, which scans your underlying `.go` files for dependencies, grabs them from GitHub as needed, and places them under a newly created `vendor` folder inside `go-sls-crudl`. Then it compiles the individual function files, placing the resulting binaries in the `bin` folder.
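Each of those function files is a self-contained Lambda entry point that handles one API Gateway route and talks to DynamoDB. The following is only a sketch of what a handler along the lines of `functions/get.go` could look like, not this repo's actual source; the `DYNAMODB_TABLE` environment variable, the struct layout, and the attribute names are assumptions inferred from the curl output later in this README.

```go
package main

import (
	"encoding/json"
	"net/http"
	"os"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Item mirrors the JSON shape returned by the API later in this README.
type Item struct {
	Year  int    `json:"year"`
	Title string `json:"title"`
	Info  struct {
		Plot   string  `json:"plot"`
		Rating float64 `json:"rating"`
	} `json:"info"`
}

var db = dynamodb.New(session.Must(session.NewSession()))

// Handler fetches a single movie keyed by the {year} and {title} path parameters.
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	year := req.PathParameters["year"]
	// Map '-' and '+' back to spaces, per the curl examples later in this README.
	title := strings.NewReplacer("-", " ", "+", " ").Replace(req.PathParameters["title"])

	out, err := db.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String(os.Getenv("DYNAMODB_TABLE")), // hypothetical env var
		Key: map[string]*dynamodb.AttributeValue{
			"year":  {N: aws.String(year)},
			"title": {S: aws.String(title)},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}

	item := Item{}
	if err := dynamodbattribute.UnmarshalMap(out.Item, &item); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}

	body, _ := json.Marshal(item)
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
}

func main() {
	lambda.Start(Handler)
}
```

The other handlers presumably follow the same shape, swapping `GetItem` for `PutItem`, `DeleteItem`, or `Query` as appropriate.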
If you look at the `serverless.yml` file, it references those freshly compiled function binaries, one for each function in our service, with each one corresponding to a different verb/path in the API we're creating (a rough sketch of that file appears after the deploy output below). Deploy the entire service with the `sls` command:
$ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (15.19 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
...................................................................................................
Serverless: Stack update finished...
Service Information
service: go-sls-crudl
stage: dev
region: us-east-1
stack: go-sls-crudl-dev
api keys:
None
endpoints:
GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}
GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}/{title}
POST - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies
DELETE - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}/{title}
PUT - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies
functions:
list: go-sls-crudl-dev-list
get: go-sls-crudl-dev-get
post: go-sls-crudl-dev-post
delete: go-sls-crudl-dev-delete
put: go-sls-crudl-dev-put
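As promised above, here is a rough sketch of what the relevant parts of `serverless.yml` might look like. It is not this repo's actual file: the handler paths come from the `make` output, the HTTP paths mirror the endpoints just listed, and everything else (service name, environment variable, table schema, capacity settings) is an assumption.

```yaml
service: go-sls-crudl

provider:
  name: aws
  runtime: go1.x
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}   # assumed naming

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  get:
    handler: bin/get               # binary produced by make
    events:
      - http:
          path: movies/{year}/{title}
          method: get
  # post, put, delete, and list-by-year follow the same pattern,
  # each pointing at its own binary under bin/.

resources:
  Resources:
    MoviesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
        AttributeDefinitions:
          - AttributeName: year
            AttributeType: N
          - AttributeName: title
            AttributeType: S
        KeySchema:
          - AttributeName: year
            KeyType: HASH
          - AttributeName: title
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```

The `resources` block is what stands in for the AWS sample's `create_table.go` step: Serverless hands it to CloudFormation, which creates the table as part of the deploy.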
When done, you can find the new DynamoDB table in the AWS Console, which should initially look like this:
Your `<base URL>` will be of the form `https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies`, where `XXXXXXXXXX` is a random string generated by AWS.
The development cycle would then be:

- `make` to compile the binaries
- `sls deploy` to construct the service and push changes to Lambda
- `curl` to then interrogate the API as described below

Once deployed, and substituting your `<base URL>`, the following curl commands can be used to interact with the resulting API; the results can be confirmed in the DynamoDB console:
curl -X POST <base URL> -d @data/post1.json
Which should result in the DynamoDB table looking like this:
Rinse/repeat for the other data files to yield:
Using the year and title (replacing spaces with '-' or '+'), you can now obtain an item as follows (prettified output):
curl <base URL>/2013/Hunger-Games-Catching-Fire
{
"year": 2013,
"title": "Hunger Games Catching Fire",
"info": {
"plot": "Katniss Everdeen and Peeta Mellark become targets of the Capitol after their victory in the 74th Hunger Games sparks a rebellion in the Districts of Panem.",
"rating": 7.6
}
}
You can list items by year as follows (prettified output):
curl <base URL>/2013
[
{
"year": 2013,
"title": "Hunger Games Catching Fire",
"info": {
"plot": "",
"rating": 0
}
},
{
"year": 2013,
"title": "Turn It Down Or Else",
"info": {
"plot": "",
"rating": 0
}
}
]
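As an aside, the empty plot and zero rating above suggest the list function only returns the key attributes. Here is a hedged sketch of the query a handler like `functions/list-by-year.go` might run, reusing the `Item` type, `db` client, and imports from the earlier `get.go` sketch (the projection and the `DYNAMODB_TABLE` environment variable are assumptions):

```go
// listByYear queries every movie stored under a given year. "year" is a
// DynamoDB reserved word, hence the expression attribute name alias.
func listByYear(db *dynamodb.DynamoDB, year string) ([]Item, error) {
	out, err := db.Query(&dynamodb.QueryInput{
		TableName:              aws.String(os.Getenv("DYNAMODB_TABLE")), // hypothetical env var
		KeyConditionExpression: aws.String("#yr = :yr"),
		ExpressionAttributeNames: map[string]*string{
			"#yr": aws.String("year"),
		},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":yr": {N: aws.String(year)},
		},
		// Guess: only the key attributes are projected, which would explain
		// the empty plot/rating in the sample output above.
		ProjectionExpression: aws.String("#yr, title"),
	})
	if err != nil {
		return nil, err
	}

	items := []Item{}
	if err := dynamodbattribute.UnmarshalListOfMaps(out.Items, &items); err != nil {
		return nil, err
	}
	return items, nil
}
```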
Using the same year and title specifiers, you can delete as follows:
curl -X DELETE <base URL>/2013/Hunger-Games-Catching-Fire
Which should result in the DynamoDB table looking like this:
You can update as follows:
curl -X PUT <base URL> -d @data/put3.json
Which should result in the DynamoDB table looking like this: