Logstash: BFD (Big Forensic Data)

John Strand was kind enough to invite me to present on the most excellent Security Weekly show last week. We talked about how Logstash (and the ELK stack as a whole) can help make sense of the massive log data generally associated with network or disk/memory-based forensic examinations.

I also released the latest update to the SANS FOR572 Logstash VMware appliance.  Learn more about the distribution and download the latest version here.

If you missed the Security Weekly presentation, check out the video or download the slides.


  1. Phil, this is a great VM. Appreciate you and the team putting this together. I am wondering if I can take the conf files from GitHub and apply them to an existing ELK install on Red Hat Linux? Thoughts?

    1. Thanks, Paul! Glad you found it useful. There are a fair number of system-level configuration items that would make it difficult to apply to another installation, but it’s not impossible. We have some future pipeline items that may make this easier but there is no definite timetable on that front, unfortunately…

  2. So I am new to ELK (and to most of this generally), and heard about the ELK VM. I booted it up, logged in, and then accessed the Kibana dashboard in my web browser. However, I am unable to search anything in the Discover tab. I have checked various websites and found it could be the timestamps, so I changed the time range (from the top right corner). But I still haven’t found anything; it still displays ‘No Results Found’. Can I get some help or guidance? (Sorry, but I am relatively new to all of this.)

    1. Hello, Antoni – glad that you’re working with ELK! The Elastic.co discussion forums are a better place for that kind of question, though. I am unable to provide support via this platform, and this sounds like an ELK question rather than one about the SOF-ELK platform I provide.

        1. Ah, OK – that is our VM; I wasn’t sure from your original message. That said, I’d still prefer to avoid support requests via the blog, as it’s inefficient. You may want to check filesystem permissions on the source files, and ensure they are in the right directories. When parsed, the ES indices will appear in the /var/lib/elasticsearch/elasticsearch/nodes/0/indices/ directory.

          Aside from those options, I’m not sure what the problem would be.

          1. Also, bug reports are handled on GitHub. Support beyond the included documentation would be on a consulting basis at this time.
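
             To make the filesystem-permission check suggested above concrete, the sketch below lists any source files that lack world-read permission, a common reason parsed data never shows up in Kibana. The helper function name and the scratch-directory demo are illustrative; on the VM you would point it at the actual ingest directory instead.

             ```shell
             # report_unreadable DIR: list files under DIR lacking world-read
             # permission, which the logstash process typically needs.
             report_unreadable() {
               find "$1" -type f ! -perm -o+r
             }

             # Demo on a scratch directory; on the VM, the argument would be
             # the real ingest path instead.
             d=$(mktemp -d)
             touch "$d/ok.log"     && chmod 644 "$d/ok.log"
             touch "$d/locked.log" && chmod 600 "$d/locked.log"
             report_unreadable "$d"    # prints only the locked.log path
             ```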

  3. Phil, I’m looking to read about a terabyte of ASA and Juniper syslog files into Logstash. I’ve tested a couple of files, but the hostname is displayed incorrectly: the hostname is listed as the current date, and the fields are rather F’d up in comparison to cat’ing the file. Can you please advise on a potential solution? On another note, I tried setting up the ELK stack myself, but I was not successful either. There seem to be many moving parts, and I’m not able to get a consistent reading of static files. The walkthrough instructions are geared more toward enterprise logging from different sources rather than a static ingest location. Are you familiar with any tutorials for the newbs? If so, please share. Thank you…

    1. Hi, Anthony – you’ll definitely need to do some solid parsing work before attacking even a few GB of logs.

      I don’t know the structure of your input data, but you can use the existing configuration files (in the VM or on GitHub: http://for572.com/for572logstash-git) as a guide to parse your ASA and Juniper log entries. I use the grokdebugger (http://grokdebug.herokuapp.com), but there are other tools that will help as well. If you get them working, please consider submitting a PR with your solution (and, if at all possible, send me a sample of logs to test and validate).

      On the file ingest, our FOR572 Logstash VM (info and link at http://for572.com/logstash-readme) is pre-configured to ingest files from the filesystem, so I’m not sure what issues you may be seeing. I created this VM for two reasons: because the setup instructions are a bit convoluted, and to include some working configuration files that forensicators can use as a starting point.

      Hope that helps and good luck!

          1. I’m trying to update the Logstash CentOS VM to version 2.2 as per your instructions, but I am hung up on a couple of steps. I initiated the git pull command, saw the logstash-2.2 branch, and restarted the service, but didn’t actually see Logstash update. The version still shows as 1.4.3 when executing /opt/logstash/bin/logstash version. Am I missing some steps? I’m not too familiar with git, so I’m at a bit of a loss. I’ve listed branches, merged the master and logstash-2.2 branches, and verified there weren’t differences (git diff), but Logstash 1.4.3 is still listed. Please help!

          2. Oh, updating to the new ELK will definitely not work well, or possibly at all – I apologize if I gave that impression. There is a new version of the VM that should be ready within a few weeks, but the entire logstash-2.2 branch will likely cause major problems with the current version of the VM.

          3. OK. So I updated Logstash, Kibana, and Elasticsearch to version 2.3. Are you familiar with the default search index parser configs? I’d like to import the configs from 572 to my instance. I’d wait until the new VM is available; however, I need to jump on some logs now. Anyhow, your assistance is well appreciated.

          4. Yes, but it took me a few dozen hours to get things working. I was not successful in getting this to work via an upgrade, only via a completely fresh install.
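
             As a rough illustration of the parsing work described in this thread, a Logstash filter for a syslog-style Cisco ASA line might start from something like the fragment below. The field names and the assumed message layout are illustrative, not taken from the FOR572 configs; validate any pattern against your own samples in the grokdebugger before trusting it. The date filter is also relevant to the hostname/date confusion mentioned above: without it, static files read long after the fact get stamped with the ingest time rather than the log’s own timestamp.

             ```
             filter {
               grok {
                 # Assumed layout: "Oct 10 13:22:01 fw01 %ASA-6-302013: Built inbound ..."
                 match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %ASA-%{INT:severity}-%{INT:message_id}: %{GREEDYDATA:asa_message}" }
               }
               # Set @timestamp from the log entry itself instead of ingest time
               date {
                 match => [ "syslog_timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
               }
             }
             ```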

  4. Phil,
    thanks for sharing the VM distro for Logstash and other goodies. I do have a question about how to process/index a standard Apache log. Per your instructions, I placed an ‘access_log’ file from an Apache webserver into “/usr/local/logstash-httpd”.

    When I load the Kibana dashboard with the Logstash defaults and change the time frame to “a month ago to now”, I get an error saying “No results. There were no results because no indices were found that match your selected time frame”.

    The access_log has records from 12/31/14 in it, so the time span of 30 days should match, correct?

    I stopped and restarted the logstash process. No luck.

    Any pointers would be greatly appreciated.

    Happy New Year.

    1. Hi, Martin – the only thing that jumps to mind is that the files may not be readable by the logstash process. I recommend chmodding files/directories to at least 444 and 555, respectively. If the file does not show up on the default for572.json dashboard, logstash hasn’t processed it.

      1. Phil,

        After changing the URL to point to ‘dashboard/file/for572.json’ the access_log shows as a data source. I also chmodded the file to 444.

        So interestingly, the default ‘dashboard/file/logstash.json’ does not see/parse the access_log.

        Looks like I am good to go. Thanks for your help.
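
        To apply the 444/555 permissions suggested above in one pass, something like the following works. It is shown on a scratch directory for illustration; on the VM the target would be the real ingest path such as /usr/local/logstash-httpd.

        ```shell
        # Make every file readable (444) and every directory traversable (555)
        # so the logstash process can reach them.
        SRC=$(mktemp -d)    # stand-in for /usr/local/logstash-httpd
        touch "$SRC/access_log"
        find "$SRC" -type f -exec chmod 444 {} +
        find "$SRC" -type d -exec chmod 555 {} +
        ls -ld "$SRC" "$SRC/access_log"
        ```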

  5. Hi Phil,
    I have a question for you: I’ve tried to load your VM in two different versions of VMware Workstation (9 & 10), to no avail. When I go to power it on, I get the following error message: “The configuration file “C:\Users\DBohatec\Downloads\FOR572 Xplico-Logstash 2014-12-18.vmwarevm\FOR572 Xplico-Logstash.vmwarevm\FOR572 Logstash.vmx” was created by a VMware product that is incompatible with this version of VMware Workstation and cannot be used.

    Cannot open the configuration file C:\Users\DBohatec\Downloads\FOR572 Xplico-Logstash 2014-12-18.vmwarevm\FOR572 Xplico-Logstash.vmwarevm\FOR572 Logstash.vmx.”

    Any clue as to the underlying cause of this? Has anyone else had similar issues running this VM? Any feedback would be great and have a great New Year!

    1. Hello, Darrell – thanks for checking out the distribution! I’m not sure what the hangup might be – this VM is designed for VMware Workstation 9 and above and we’ve used it successfully with Fusion, Workstation, and Player. Although I’m just guessing, have you fully extracted the contents of the zip file to your hard drive, or are you running them from the Windows Explorer “zip explorer” view? (The pathnames you pasted above lead me to believe it may be the latter case.) You’ll definitely need to fully extract the contents to your system before opening the .vmx file in VMware Workstation. Other than that, off the top of my head I’d first check the MD5 of the zip download, then the permissions of all the files you’ve extracted from the zip file.
