Although I’m not much of an academic, I really appreciate some of the great research – pure and applied – that is done every day to further our collective understanding and capabilities. Since I work a lot with the computer forensic sector, I often find some excellent research that happens to line up with something I’ve encountered for a case.
As a recent example, I’ve been tasked with identifying the forensic differences between VMware snapshots, but that’s just the latest instance. Plug in your own mad libs.
After a short bit of digging, I came across a very promising paper from the Rochester Institute of Technology (RIT) that covers this exact situation. Others in the forensic community – some of whom I know and trust – also pointed to the paper as potentially valuable work.
The paper detailed a series of bash shell scripts the author wrote to accomplish exactly what I was after. “Great! This is a perfect starting point!”, says me. However, it soon became apparent that the author had not made those scripts available in any form other than text in an appendix of the PDF. I’m no slouch, and I can copypasta with the best of them, so I set out to do just that… except that the formatting required some pretty heavy manual tweaking to get over 1,300 lines of script content into usable form. Now I’ll be troubleshooting the transcribed scripts to ensure no errors were introduced by the re-formatting process. Not an ideal use of time, and a significant hurdle to generating what I hope will be useful real-world experience with the author’s work.
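For anyone facing the same transcription chore, a rough first pass can be automated. This is only a sketch under assumed conditions: it presumes the pasted appendix text (a hypothetical appendix.txt here) is littered with standalone page-number lines and trailing whitespace. Anything subtler, such as commands hard-wrapped across lines, still needs manual review.

```shell
# Hypothetical cleanup of script text pasted from a PDF appendix.
# Assumes two common artifacts: lines containing only a page number,
# and trailing whitespace left over from the PDF layout.
grep -v -E '^[[:space:]]*[0-9]+[[:space:]]*$' appendix.txt \
  | sed -E 's/[[:space:]]+$//' > cleaned.sh
```

This won’t catch everything, but it cuts the manual tweaking down to the genuinely ambiguous lines.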
So I offer the following advice to those in academia as you find and address problems:
- Keep doing great stuff! There are lots of us in your respective communities who really appreciate your work and would love to provide real-world use-case feedback.
- If you write programs, scripts, or other software-like proofs-of-concept: please, PLEASE provide the scripts themselves. Whether as a download link or even a pastebin URL, let us test your code without unnecessary hurdles or extra steps. Include SHA checksums in your paper for some level of integrity validation.
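As a minimal sketch of that last suggestion, assuming GNU coreutils and a directory of scripts:

```shell
# Author side: record SHA-256 checksums of every script,
# and publish the resulting list in the paper itself
sha256sum *.sh > SHA256SUMS

# Reader side: verify downloaded copies against the published list
sha256sum -c SHA256SUMS
```

A single line per file in an appendix is enough for readers to confirm they’re running the code the author actually tested.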
Update, March 15, 2013: I’ve uploaded the scripts to GitHub, and plan to make modifications and improvements there. The originals needed modification to function in the SANS SIFT workstation. Aside from those changes and a few minor tweaks, they’re uploaded in near-original form. See the code here: https://github.com/philhagen/vmware-snapcompare.