
Storing honeypot data with Elasticsearch and Filebeat

I really love Elasticsearch. I’ve been using it for around two years now, both as a user and as the person deploying and managing a fairly large cluster with a heavy data ingest load. So I knew Elasticsearch would be the perfect data storage platform for HoneypotDB, my global honeypot project.

For those of you who don’t know, Elasticsearch is an incredibly scalable, non-relational document store built from the ground up for massive data ingest while still supporting advanced queries. Perfect for, well, “You Know, for Search!” :D

Utilising Kibana, I’ll also be able to easily create some awesome visualisations, analysis dashboards and metrics from the collected data. It will also prove useful when designing mock-up visuals and APIs for HoneypotDB ;)

Starting off, HoneypotDB isn’t going to generate that much data, especially while I’m building and testing it. A single Elasticsearch node would be absolutely fine; however, knowing what I know about Elasticsearch, the thought of running just one makes me want to cry a little. So I’ll have three nodes :D

My Elasticsearch ‘cluster’ has three nodes, all configured as master/data/ingest nodes. This should do nicely for now.
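For reference, here’s a rough sketch of what each node’s elasticsearch.yml could look like. The cluster name, node names and hostnames are placeholders, and the exact keys depend on your Elasticsearch version (this sketch assumes 7.x, where the legacy node.master/node.data/node.ingest flags still work):

```yaml
# elasticsearch.yml — sketch for one of the three nodes (names/hosts are placeholders)
cluster.name: honeypotdb
node.name: es-node-1

# Every node acts as a master-eligible, data and ingest node
node.master: true
node.data: true
node.ingest: true

network.host: 0.0.0.0

# Elasticsearch 7.x discovery settings; adjust for your version
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
```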

Feed me data!

As mentioned in my previous post about deploying Cowrie SSH honeypots with Ansible, I’m going to be using Filebeat to ship honeypot logs to two Logstash instances (Filebeat configured with load balancing) for analysis, and then on to Elasticsearch for indexing and storage. As a high-level overview, it looks kinda like this:

[Figure: Honeypot logs flow — /6-shiping-honeypot-data-with-filebeat/Honeypot-Logs-flow.png]
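On the shipping side, Filebeat’s Logstash output handles the load balancing for us. Here’s a rough sketch of the relevant bit of filebeat.yml — the Cowrie JSON log path and the Logstash hostnames are assumptions, so swap in your own:

```yaml
# filebeat.yml — sketch; log path and hostnames are placeholders
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /home/cowrie/cowrie/var/log/cowrie/cowrie.json*
    # Cowrie logs JSON, so lift the fields to the top level of the event
    json.keys_under_root: true

output.logstash:
  # Two Logstash instances; loadbalance spreads events across both
  hosts: ["logstash-1:5044", "logstash-2:5044"]
  loadbalance: true
```

And on the Logstash end, a minimal pipeline sketch that just takes Beats input and pushes it into the cluster — the index name and Elasticsearch hosts are placeholders, and the actual analysis/enrichment filters will come later:

```conf
# honeypot.conf — minimal Logstash pipeline sketch
input {
  beats {
    port => 5044
  }
}

filter {
  # Analysis / enrichment (e.g. geoip on the attacker IP) goes here
}

output {
  elasticsearch {
    hosts => ["es-node-1:9200", "es-node-2:9200", "es-node-3:9200"]
    index => "honeypotdb-%{+YYYY.MM.dd}"
  }
}
```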

I’m looking forward to scaling each part of this data flow as HoneypotDB grows :D

Example data and visualisations to follow :D