LogDNA Agent streams logs from log files to your LogDNA account. It works with Linux, Windows, and macOS servers.
Check out the official LogDNA site to learn how to sign up for an account and get started.
Follow these instructions to run the LogDNA agent from source:
```shell
git clone https://github.com/logdna/logdna-agent.git
cd logdna-agent
npm install

# help
sudo node index.js --help

# configure
sudo node index.js -k <YOUR LOGDNA INGESTION KEY>
# On Linux, this will generate a config file at: /etc/logdna.conf
# On Windows, this will generate a config file at: C:\ProgramData\logdna\logdna.conf

# on Linux, /var/log is monitored/added by default (recursively); you can optionally specify more folders
# on Windows, C:\ProgramData\logs is monitored/added by default (recursively); you can optionally specify more folders
sudo node index.js -d /path/to/log/folders -d /path/to/2nd/folder
sudo node index.js -d /var/log                          # folder only assumes *.log + extensionless files
sudo node index.js -d "/var/log/*.txt"                  # supports glob patterns
sudo node index.js -d "/var/log/**/*.txt"               # *.txt in any subfolder
sudo node index.js -d "/var/log/**/myapp.log"           # myapp.log in any subfolder
sudo node index.js -d "/var/log/+(name1|name2).log"     # supports extended glob patterns
sudo node index.js -e /var/log/nginx/error.log          # exclude specific files from -d
sudo node index.js -f /usr/local/nginx/logs/access.log  # add specific files
sudo node index.js -t production                        # tags
sudo node index.js -t production,app1=/opt/app1         # tags for specific paths

# other commands
sudo node index.js -l                  # show all saved options from config
sudo node index.js -l tags,key,logdir  # show specific entries from config
sudo node index.js -u tags             # unset tags
sudo node index.js -u tags,logdir      # unset tags and logdir
sudo node index.js -u all              # unset everything except ingestion key

# start the agent
sudo node index.js
```
Note that when using glob patterns with `index.js`, you must enclose the pattern in double quotes.
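To see why the quotes matter, compare how the shell treats an unquoted versus a quoted pattern (the `/tmp/globdemo` directory below is purely for illustration):

```shell
# Create two sample files (illustration only)
mkdir -p /tmp/globdemo
touch /tmp/globdemo/a.txt /tmp/globdemo/b.txt

# Unquoted: the shell expands the glob before the agent ever sees it,
# so the agent receives individual file paths instead of the pattern
printf '%s\n' /tmp/globdemo/*.txt    # two separate arguments

# Quoted: the literal pattern is passed through untouched,
# letting the agent expand it itself (and match future files too)
printf '%s\n' "/tmp/globdemo/*.txt"  # one argument: /tmp/globdemo/*.txt
```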
Normally a config file is automatically generated (e.g. when you set a key using `index.js -k`) and updated (e.g. when you add a directory with `index.js -d`), but you can create your own config file at `/etc/logdna.conf` on Linux or `C:\ProgramData\logdna\logdna.conf` on Windows and save your settings there:
```conf
logdir = /var/log/myapp,/path/to/2nd/dir
key = <YOUR LOGDNA INGESTION KEY>
```
On Windows, use `\\` as a path separator:
```conf
logdir = C:\\Users\\username\\AppData\\myapp
key = <YOUR LOGDNA INGESTION KEY>
```
- `logdir`: sets the paths that the agent will monitor for new files. Multiple paths can be specified, separated by `,`. Supports glob patterns and specific files. By default this option is set to monitor `.log` and extensionless files under `/var/log`.
- `exclude`: sets files to exclude that would otherwise match what's set in `logdir`. Multiple paths can be specified, separated by `,`. Supports glob patterns and specific files.
- `exclude_regex`: filters out any log lines matching this pattern in any file. The pattern should not include leading or trailing `/`.
- `key`: your LogDNA Ingestion Key. You can obtain one by creating an account at LogDNA. Once logged in, click on the Gear icon, then Account Profile, to find your key.
- `tags`: tags can be used, e.g., to separate data from production, staging, or autoscaling use cases.
- `hostname`: set this to override the OS hostname.

New files are picked up automatically: the agent rescans the `logdir` paths every minute.
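Putting the options together, a hand-written `/etc/logdna.conf` might look like the following (the paths, tag, pattern, and hostname are placeholders for illustration):

```conf
logdir = /var/log/myapp,/var/log/nginx
exclude = /var/log/nginx/access.log
exclude_regex = DEBUG
key = <YOUR LOGDNA INGESTION KEY>
tags = production
hostname = myapp-host-01
```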
```shell
# YUM Repo
echo "[logdna]
name=LogDNA packages
baseurl=https://repo.logdna.com/el6/
enabled=1
gpgcheck=1
gpgkey=https://repo.logdna.com/logdna.gpg" | sudo tee /etc/yum.repos.d/logdna.repo

# APT Repo
echo "deb https://repo.logdna.com stable main" | sudo tee /etc/apt/sources.list.d/logdna.list
wget -O- https://repo.logdna.com/logdna.gpg | sudo apt-key add -
sudo apt-get update
```
The LogDNA agent authenticates using your LogDNA Ingestion Key and opens a secure WebSocket to LogDNA's ingestion servers. It then 'tails' your log files for new data and watches for new files added to your logging directories.
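The 'tailing' half of that behavior can be illustrated with plain `tail -F`, a rough stand-in for what the agent does internally (the file paths here are just for demonstration):

```shell
log=/tmp/agent-demo.log
: > "$log"                                 # start with an empty log file

tail -n0 -F "$log" > /tmp/captured.txt &   # follow the file for new data only
TAIL_PID=$!
sleep 1

echo "new log line" >> "$log"              # an application appends a line...
sleep 1
kill "$TAIL_PID"

cat /tmp/captured.txt                      # ...and the follower picked it up
```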
```shell
brew cask install logdna-cli
logdna register <email>
```
Please see our documentation for Kubernetes instructions.
OpenShift logging requires a few additional steps over Kubernetes, but it's still pretty easy! As with Kubernetes, we extract pertinent metadata: pod name, container name, container id, namespace, project, labels, and so on:
```shell
oc adm new-project --node-selector='' logdna-agent
oc project logdna-agent
oc create serviceaccount logdna-agent
oc adm policy add-scc-to-user privileged system:serviceaccount:logdna-agent:logdna-agent
oc create secret generic logdna-agent-key --from-literal=logdna-agent-key=<YOUR LOGDNA INGESTION KEY>
oc create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds-os.yaml
```
This automatically installs a logdna-agent pod into each node in your cluster and ships stdout/stderr from all containers, both application logs and node logs. Note: by default, the agent pod will collect logs from all namespaces on each node, including `kube-system`. View your logs at https://app.logdna.com. See the YAML file for additional options.

Notes:

- Creating the project with the `oc adm new-project` method prevents having to adjust the project namespace's node-selector after creation.
- The DaemonSet YAML sets `JOURNALD=files`; you may need to change this if you have changed your OpenShift logging configuration.
If you're using a Docker-less containerized operating system (e.g. podman-based) like Fedora CoreOS, where logs are forwarded to journald, you won't be able to use logspout (if your OS does use Docker, refer to LogDNA LogSpout for instructions on how to set that up).
You can run logdna-agent inside a container to read from journald with a few
modifications. First, you'll need to set up systemd inside the container so
that it can read from
journalctl. Note that due to the systemd dependency,
you may have some difficulties starting from a base image like Alpine. Instead,
try starting from a distribution such as Debian or CentOS. The following
assumes that you're starting from the Ubuntu based image provided by LogDNA.
In your Containerfile, install systemd:
```dockerfile
FROM logdna/logdna-agent

# Install systemd so we can read logs via journalctl
RUN apt-get update \
    && apt-get install -y --no-install-recommends systemd \
    && rm -rf /var/lib/apt/lists/*
```
Next, you need to ensure the `USEJOURNALD` environment variable is set. If set to `files`, the agent will read from journald and forward the logs. The agent can be configured either in your image or in your entrypoint.
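For example, the variable can be baked into the image or passed at run time; the check below sketches the behavior described above (the echo messages are illustrative, not the agent's actual output):

```shell
USEJOURNALD=files   # e.g. set via `ENV USEJOURNALD=files` in your Containerfile,
                    # or passed with `podman run -e USEJOURNALD=files ...`

if [ "$USEJOURNALD" = "files" ]; then
  echo "agent reads from journald"
else
  echo "agent reads log files from logdir"
fi
```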
By default the agent will read logs from files under `/var/log`. These logs won't be very useful, since they'll be referencing the container and not the host. You can't simply unset `logdir`, since the agent will still read from `/var/log` if `logdir` is missing from the configuration. Instead, to disable reading from `/var/log`, set `logdir` to an empty directory.

Finally, mount `/var/log/journal` inside the container so the agent can use `journalctl` to read logs. An example systemd service configuration could look like the following:
```ini
[Unit]
Description=LogDNA Forwarder
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
ExecStartPre=-/bin/podman kill logdna
ExecStartPre=-/bin/podman rm logdna
ExecStartPre=/bin/podman pull my-custom/logdna-agent
ExecStart=/bin/podman run -v /var/log/journal:/var/log/journal:z --name logdna my-custom/logdna-agent

[Install]
WantedBy=multi-user.target
```
The LogDNA agent can be installed through Chocolatey. For prerequisites and more details, view our Windows Agent docs.
Our paid plans start at $1.50/GB per month. Pay only for what you use, with no fixed data buckets. All paid plans include all features.
Contributions are always welcome. See the contributing guide to learn how you can help. Build instructions for the agent are also in the guide.