`sudo aureport` and the underlying events with
`sudo ausearch --raw` or filter them with
`sudo ausearch --success no`. Optionally point to the rules in `/etc/audit/audit.rules`.
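The rules behind the tags used in this demo could look something like the following sketch of `/etc/audit/audit.rules` — the watched paths and permissions are assumptions; only the key names (which Auditbeat surfaces as `tags`) are taken from the demo:

```
# sketch of /etc/audit/audit.rules — paths and permissions are assumptions;
# the -k keys show up as tags in the Auditbeat events
-w /etc/passwd -p r -k developers-passwd-read
-w /home/elastic-user/ -p rwa -k power-abuse
```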
`ssh [email protected]` with a bad password and show the failed login on the [Filebeat System] SSH login attempts dashboard.
`cat /etc/passwd` and find the event with
`tags is developers-passwd-read`.
`service nginx restart` and pick the
`elastic-admin` user to run the command. Show the execution on the [Auditbeat Auditd] Executions ECS dashboard by filtering down to the
`ssh [email protected]`, check the directory `/home/elastic-user`, and read the file `/home/elastic-user/secret.txt` (this will require sudo). Search for the tag
`power-abuse` to see the violation.
`tags is elevated-privs`.
`netcat -l 1025` and start a chat with
`telnet <hostname> 1025`. Find it in the [Auditbeat System] Socket Dashboard ECS in the destination ports list and filter down on it. Optionally show the alternative with Auditd by filtering in Discover on
`firejail --noprofile --seccomp.drop=bind -c nc -v -l 1025`. This will show up as
`"event.action": "violated-seccomp-policy"` in the Auditbeat events. Alternatively you can find the event with
`dmesg` on the shell.
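A quick way to pull the violation out of the kernel log — assuming the seccomp kill is logged with a line mentioning seccomp, which is the common default:

```shell
# look for the seccomp kill of the sandboxed nc process
sudo dmesg | grep -i seccomp
```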
`/var/www/html/.index.html`. See the change in the [Auditbeat File Integrity] Overview ECS dashboard. Depending on the editor, the actions might be slightly different; nano will generate an
`updated` event, whereas vi does a
`1025` (the port). Drop the process
`netcat` into the Timeline view and see all the related details for it. Add a comment to the event noting when we opened the port.
Make sure you have run this before the demo.
`AWS_SECRET_ACCESS_KEY`. Pro tip: use https://awesomeopensource.com/project/sorah/envchain to keep your environment variables safe.
`elastic_version`, enable Kibana as well as the GeoIP & user agent plugins, and set the environment variables with the values for
`ELASTICSEARCH_PASSWORD`, as well as
`TF_VAR_zone_id`. If you haven't created the Hosted Zone yet, you should set it up in the AWS Console first and then set the environment variable.
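With envchain, storing and using these variables could be sketched as follows — the namespace name `demo` is an assumption, and further variables (such as your AWS access key ID) can be added the same way:

```shell
# store the secrets once; envchain prompts for each value
# (the namespace name "demo" is an assumption)
envchain --set demo AWS_SECRET_ACCESS_KEY ELASTICSEARCH_PASSWORD TF_VAR_zone_id

# run Terraform with those variables injected into the environment
envchain demo terraform plan
```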
`terraform init` first. Then create the keypair, DNS settings, and instances with
When you are done, remove the instances, DNS settings, and key with
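The full Terraform lifecycle for the demo could be sketched like this — standard Terraform commands, with prompts and outputs omitted:

```shell
terraform init      # fetch providers and set up the working directory
terraform apply     # create the keypair, DNS settings, and instances
terraform destroy   # remove all of it again when you are done
```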
To build an AWS AMI for Strigo, use Packer. Thanks to the Ansible Local Provisioner you only need Packer installed locally (no local Ansible installation). Build the AMI with
`packer build packer.json` and set up the training class on Strigo with the generated AMI and the user
`cloud: true`, you won't add a local Elasticsearch and Kibana instance. But you must then add the
`elasticsearch_password` user to that cloud account for the setup to work, add the
`cloud.id` to all the Beats, and restart them.
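The cloud settings for each Beat could look like this sketch of the relevant lines in the Beat's `.yml` — the values are placeholders:

```yaml
# sketch: Elastic Cloud connection settings for a Beat (placeholder values)
cloud.id: "<cloud.id from the Elastic Cloud console>"
cloud.auth: "elastic:<elasticsearch_password>"
```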
If things are failing for some reason: run
`packer build -debug packer-ansible.yml`, which will keep the instance running and save the SSH key in the current directory. Connect to it with
`ssh -i ec2_amazon-ebs.pem [email protected]`; open ports as needed in the AWS Console, since the instance will only open TCP/22 by default.