Kibaaana

Download:

Get the Linux tar.gz packages (Elasticsearch, Logstash, Kibana and Filebeat), put them somewhere nice in your home directory, and extract each of them with:

tar -xzvf <your stuff>.tar.gz
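If you'd rather grab the packages from the command line than from the download pages, something along these lines works; the version number and file name here are assumptions, so take the real ones from elastic.co's download pages:

# hypothetical version - check the download page for the current one
VERSION=7.17.0
curl -L -O "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${VERSION}-linux-x86_64.tar.gz"
tar -xzvf "filebeat-${VERSION}-linux-x86_64.tar.gz"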

Also download a sample log file (or use your own if you feel brave enough). A sample file along with a nice tutorial (for logstash) can be found here: https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
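If I remember the tutorial right, the sample file is fetched and unpacked roughly like this (the URL is quoted from memory, so double-check it on the linked page):

curl -L -O https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gunzip logstash-tutorial.log.gz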

Start with filebeat.

Basically the config file is filebeat.yml, which by default lives in Filebeat's root directory. For example, this config:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ../logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5044"]
setup.kibana:
  host: "localhost:5601"

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

pushes the output to logstash. If you want to push it directly to elasticsearch, change the output field:

output.elasticsearch:
  hosts: ["localhost:9200"]

The part under filebeat.config.modules: is there so Filebeat can load its modules. Modules are supposed to give you nicer out-of-the-box visualizations, based on the type of the source. Your usual commands for modules:

./filebeat modules list
./filebeat modules enable apache
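If you go the module route and want those out-of-the-box visualizations, the bundled index templates and Kibana dashboards have to be loaded once (Elasticsearch and Kibana need to be running already, see further down):

./filebeat setup -e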

To start filebeat, just execute it - no install or root rights are necessary for this or for anything that follows (that's the nice thing about Kibana and Elasticsearch, although of course they are not the only visualization platform to offer this) - like so:

./filebeat -e

Also important: if you ever alter the filebeat.yml config to enable/disable something, you have to restart Filebeat for the change to take effect, and if you want it to re-ship a log file it has already read, you also have to clear its registry first (Filebeat remembers what it has already sent).
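A minimal sketch of that dance, assuming the tar.gz layout and that the registry lives under data/ (the exact path varies between Filebeat versions):

# stop the running filebeat (Ctrl+C), edit filebeat.yml, then:
rm -rf data/registry*      # only if already-read files should be shipped again
./filebeat -e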

Setup logstash (optional)

Its config file is something like:

input {
    beats {
        port => "5044"
    }
}
# The filter part of this file is optional.
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}

This sets it up so that Logstash reads events from Filebeat and pushes them to Elasticsearch, using a couple of filter plugins on the way: grok to parse the Apache log lines and geoip to add location information for the client IP. Save this in Logstash's root directory as first-pipeline.conf, for example.

Then execute logstash: bin/logstash -f first-pipeline.conf --config.reload.automatic
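Before running it for real, the pipeline file can be checked for syntax errors:

bin/logstash -f first-pipeline.conf --config.test_and_exit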

Setup elasticsearch

Time for action. Start Elasticsearch from its root directory with ./bin/elasticsearch. It provides - as far as I understood - a nice REST API to query your data and everything. Useful requests:

curl 'localhost:9200/_cat/indices?v'                             # list indices
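A couple more requests I found handy; the index name in the search is an assumption (logstash-* here, see below for which one you actually get):

curl 'localhost:9200/_cat/health?v'                              # cluster health
curl 'localhost:9200/logstash-*/_search?q=response:200&pretty'   # query the parsed apache logs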

Indices are a reference to your data. Be nice with them because they are nasty. Basically, indices are generated automatically based on Filebeat's output. For example, if Filebeat's output is set to Logstash (with the pipeline above), an index named logstash-* is generated by default. If its output is Elasticsearch directly, you get a filebeat-* index.
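That also means you can throw an index away and start over while experimenting - just be aware this deletes the data behind it, and newer Elasticsearch versions may refuse wildcard deletes unless explicitly allowed:

curl -XDELETE 'localhost:9200/logstash-*'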

Take your Kibana out

After you extract it, basically just run it with ./bin/kibana. Then the idea is to go to its GUI (localhost:5601), create an index pattern that matches your index (logstash-* or filebeat-*, see above) under the management section, and then explore the data in Discover or build visualizations and dashboards on top of it.
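To check from another terminal that it actually came up (default port assumed):

curl 'localhost:5601/api/status'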

The documentation is not very intuitive in my opinion, but here are some links to it:

After some hours with it, I wonder if it's worth it, or if I should just polish my awk and sed skills; and should I really need to plot something, well, there's nothing some Python code can't do. Actually, maybe I will do it with Python. Added to my list.