I wanted to deploy Jitsi under a subdomain on AWS in 5 minutes, so I built this. My partner used it exclusively, instead of Zoom, to teach her modern dance classes to students during the coronavirus quarantine. Give it a try.
- Set `subdomain` to the subdomain you wish your installation to appear under.
- Set `region` to the AWS region. I use `us-west-2`. This must be the same region as your keypair and certificate. See the full list.
- Set `instance_type` to a machine with the power you want. See the full list.
- Set `key_name` to the name of your SSH keypair created in AWS.
- Set `dns_zone` to the ID of your DNS zone.
- Set `cert_arn` to the ARN of your certificate. It will start with `arn:aws:acm:`.
- `jitsi_branch` controls which branch of docker-jitsi-meet is deployed to the EC2 instance.
- `tf_jitsi_branch` controls which branch of this repo is deployed to the EC2 instance.
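Putting the variables together, a filled-in configuration might look roughly like this. The values below are placeholders and the exact file layout is an assumption; only the variable names come from the list above:

```shell
# scripts/common.sh -- example values only; substitute your own.
subdomain="test"                      # yields test.<your zone>
region="us-west-2"                    # must match your keypair and certificate
instance_type="t3.large"              # hypothetical size; pick your own
key_name="my-aws-keypair"             # your SSH keypair name in AWS
dns_zone="Z0000000000000"             # placeholder DNS zone ID
cert_arn="arn:aws:acm:us-west-2:..."  # placeholder certificate ARN
jitsi_branch="master"                 # branch of docker-jitsi-meet to deploy
tf_jitsi_branch="master"              # branch of this repo to deploy
```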
Run `scripts/provision_subdomain.sh` and wait while Terraform spins up your infrastructure. When the instance has been brought up, you'll see output like the following:
```
Outputs:

domain = test.myjitsiserver.com
public_ip = 184.108.40.206
```
This is where you can access your Jitsi installation. The server is still setting up, however, so give it a few minutes before hitting the URL; it typically takes around 5 minutes to come live.
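While you wait, you can poll the new URL until it answers. A minimal sketch, using the `test.myjitsiserver.com` domain from the example output; `retry_until_up` is my own helper name, not something shipped in this repo:

```shell
# Retry a command until it succeeds, up to a limit.
retry_until_up() {
  max=$1; delay=$2; shift 2   # $1 = max attempts, $2 = seconds between attempts
  i=0
  while ! "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1
    sleep "$delay"
  done
}

# Poll every 30s for up to 10 minutes (setup takes ~5 minutes, per above):
# retry_until_up 20 30 curl -fsI https://test.myjitsiserver.com >/dev/null
```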
This will tear down an individual subdomain but leave up the common infrastructure that other subdomains may be relying on.
Important: make sure `scripts/common.sh` is set to the values of the subdomain you wish to destroy.
This will tear down the common infrastructure for a particular region.
TBD. This depends on your instance type and on outbound traffic, which AWS bills at $0.09/GB. Bandwidth also depends on your participants: both how many there are and which browsers they use, since some browsers use simulcast (resulting in more efficient bandwidth usage) while others don't.
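For a rough feel of the egress component, here's a back-of-envelope calculation. The participant count and per-participant bitrate are assumptions for illustration; only the $0.09/GB price comes from above:

```shell
# Rough egress cost estimate. All traffic figures are assumed, not measured.
participants=5
mbps_each=1.5      # assumed average outbound stream per participant
hours=1
price_per_gb=0.09  # AWS egress price cited above

awk -v n="$participants" -v mbps="$mbps_each" -v h="$hours" -v p="$price_per_gb" 'BEGIN {
  gb = n * mbps * h * 3600 / 8 / 1000   # megabits -> gigabytes
  printf "~%.1f GB egress, ~$%.2f for the call\n", gb, gb * p
}'
# prints: ~3.4 GB egress, ~$0.30 for the call
```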
There are two Terraform modules: "base" and "jitsi". I structured it this way because I wanted the flexibility to create multiple subdomain deployments on top of common infrastructure, which means the base has to live in separately managed Terraform state.
The "base" module provides common infrastructure for many installations of "jitsi" modules. It creates a per-region workspace (e.g. "us-west-2") for its Terraform state. This means you can have multiple base infrastructures in different regions. A per-region base infrastructure is required because you cannot link compute resources to subnets outside of your region.
The "jitsi" module provides an individual installation of Jitsi under a subdomain. It creates per-subdomain workspaces for its Terraform state. This means you can have multiple Jitsi installations, under different subdomains, under a common hostname, all sharing the common "base" module infrastructure. For example, you could have:
And each of these subdomains is running on separate hardware provisioned with tf-jitsi.
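For example, to stand up a second installation alongside an existing one, you'd change the subdomain in `scripts/common.sh` and provision again. A sketch, where `dance` is a hypothetical subdomain:

```shell
# In scripts/common.sh:
#   subdomain="dance"              # was e.g. "test"

# Then provision again; the "jitsi" module gets a fresh per-subdomain
# workspace and EC2 instance, while the "base" infrastructure is reused:
./scripts/provision_subdomain.sh
```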
If you plan to customize tf-jitsi, there are a few tricks you can use.
You can specify custom branches in `scripts/common.sh`. You'll also need to change them to use your own fork of the repo.
If you are rapidly iterating on tf-jitsi changes and just want to re-deploy the EC2 instance without touching the rest of the infrastructure, you can use `terraform taint` via the `scripts/taint_instance.sh` script. This marks the EC2 instance resource as "tainted", so the next time you run `scripts/provision_subdomain.sh`, that resource (and anything that depends on it) will be re-created, while leaving most of the other infrastructure alone.
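Under the hood, the script does roughly the following. This is a sketch: the workspace name and resource address are assumptions, so check the script and module for the real ones:

```shell
# Select the subdomain's workspace (assumed name "test"):
terraform workspace select test

# Mark the instance for re-creation (resource address is an assumption):
terraform taint aws_instance.jitsi

# Re-apply; only the tainted instance and its dependents are re-created:
./scripts/provision_subdomain.sh
```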
SSH into the instance, substituting the `public_ip` from the Terraform output: `ssh -i ~/.ssh/your_ssh_keypair.pem <user>@<public_ip>`
Run `systemctl status jitsi`. It should say "active (running)". To inspect logs, run `journalctl -u jitsi`.
`docker ps` should list running containers for the following images:
`curl -I http://localhost:81` should show:

```
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Tue, 28 Apr 2020 15:14:09 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://localhost/
```
`curl -I http://localhost` should show:

```
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 28 Apr 2020 15:14:15 GMT
Content-Type: text/html
Connection: keep-alive
```
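If you want to script these two checks, a small helper works. `expect_status` is my own name, not something shipped in this repo:

```shell
# Read an HTTP response head on stdin and verify the status code.
expect_status() {
  read -r _proto code _rest   # e.g. "HTTP/1.1 301 Moved Permanently"
  [ "$code" = "$1" ]
}

# Usage, mirroring the curl checks above:
# curl -sI http://localhost:81 | expect_status 301 && echo "redirect OK"
# curl -sI http://localhost    | expect_status 200 && echo "site OK"
```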