Resources to Help Identify Password Hash Formats

One question that I get asked a lot when I’m teaching the password cracking section in the SANS SEC504 class is “Once I get a password hash, how do I figure out what type of hash it is?” I mention a few resources in class but thought it would be worthwhile to put together a quick write-up to help past and future students after the class.

The first thing I always mention is that you will likely know exactly what type of hash it is based on how you acquired it. If you use Meterpreter to dump hashes from a Windows system, grab the hashes from an /etc/shadow file, or capture a hash using Responder, you know exactly what type of hash it is based on the method you used to capture it. The same goes if you obtained the hash from an encrypted file, as I discussed in this blog post on the SANS pen test blog.

With that out of the way, let’s talk about what to do when you’re not sure what type of hash it is.

Option 1: Have a program identify the hash for you

Some password cracking programs, like John the Ripper, will try to identify the hashes you ask them to crack, but they're not always right.

Another option is HashTag, available here. HashTag is a Python program that can look at a single hash, or a text file full of hashes, and attempt to identify them for you. It will generate a list of the hashes it found and what it thinks they could be.

It appears to detect 269 different hash formats and even includes a handy Excel spreadsheet of those formats, complete with examples.
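
If you just want a quick sanity check without installing anything, a few lines of Python can narrow things down based on length, character set and common prefixes. This is only a rough sketch of the idea behind tools like HashTag; the patterns below are a tiny subset I picked for illustration, and the helper name is mine:

import re

# A tiny, illustrative subset of hash patterns. Real identifiers such as
# HashTag track hundreds of formats, so treat any match here as a starting point.
CANDIDATES = [
    (re.compile(r'^[0-9a-f]{32}$', re.I), 'MD5 / NTLM / LM (32 hex characters)'),
    (re.compile(r'^[0-9a-f]{40}$', re.I), 'SHA-1'),
    (re.compile(r'^[0-9a-f]{64}$', re.I), 'SHA-256'),
    (re.compile(r'^\$1\$'), 'md5crypt ($1$)'),
    (re.compile(r'^\$6\$'), 'sha512crypt ($6$), common in /etc/shadow'),
    (re.compile(r'^\$2[aby]\$'), 'bcrypt'),
]

def guess_hash(candidate):
    """Return the formats a single hash string could plausibly be."""
    candidate = candidate.strip()
    matches = [name for pattern, name in CANDIDATES if pattern.search(candidate)]
    return matches or ['unknown, check the hashcat example_hashes wiki']

if __name__ == '__main__':
    for sample in ['8846f7eaee8fb117ad06bdd830b7586c', '$6$examplesalt$notarealhash']:
        print('{} -> {}'.format(sample, ', '.join(guess_hash(sample))))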

Option 2: Check the Wiki that Hashcat maintains for examples

When you’re trying to figure out what a hash is, it’s always important to ask yourself what seems likely. If the hashes came from a SQL injection attack against a custom web app running on an Apache server, LanMan hashes are highly unlikely. In that scenario, options like MD5 are much more likely.

If you have an idea of what the hash might be, Hashcat maintains a fantastic wiki of example password hashes for different formats at: https://hashcat.net/wiki/doku.php?id=example_hashes

Option 3: Ask for help

Hashcat maintains a fairly active forum at https://hashcat.net/forum/. You ARE NOT allowed to post hashes in the forum (doing so is grounds for getting banned), but if you sanitize the hash you can post it, provide what details you can about the source, and ask if anyone has advice on what it is and how to deal with it. I’ve seen veterans go the extra mile on edge cases where things like a custom salt encoding were used.

Summary:

As I stated in the beginning, we usually have a really good idea of what the format of a hash is. If the hashes come from a custom web app or some other obscure source, we now have a few resources we can check so that we can correctly identify them, and more importantly, start cracking them 🙂

 

Persistent Monitoring on a Budget

I am a huge fan of Justin Seitz and his blog. Last holiday season he let me know that Pastebin was having a sale on its API access, and that he was planning on using it in a future project. He came out with that project in April with a post where he used Python, the Pastebin API and a self-hosted search engine called searx to build a persistent keyword monitor which shot out an alert email anytime it found a “hit”.

I was a huge fan of his post and made some modifications to his code to fit my needs. I’m much more of a hacker than a coder, so I’m sure there are more elegant ways to achieve what I did, but it’s been meeting my needs for several months and has had multiple relevant finds. I was recently asked for a copy of my modifications, so I thought it easiest to post them on GitHub and write up a description here.

Mod 1: Dealing with spaces in search terms.

Early on I noticed that I would have fantastic results when looking for email addresses hitting Pastebin and other sites, but was getting quite a few false positives on names. I tested searx and it appeared to respect quotes in searches like Google does, i.e. searching Matt Edmondson will return pages that contain both “Matt” and “Edmondson”, regardless of whether they are adjacent, while searching for “Matt Edmondson” forces them to be adjacent. I made a minor modification to the code in the searx section to check each search term for spaces. If the term contains spaces, it places quotes around the term before searching for it in searx. This modification did indeed help reduce false positives on multi-word search terms.
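
The change boils down to just a few lines. Here's a minimal sketch of the idea (the function name is mine, not the one in the actual script):

def prepare_search_term(term):
    """Wrap multi-word terms in quotes so searx treats them as an exact phrase."""
    term = term.strip()
    if ' ' in term and not term.startswith('"'):
        return '"{}"'.format(term)
    return term

# Single words and email addresses pass through untouched, names become phrases.
for term in ['matt@example.com', 'Matt Edmondson']:
    print(prepare_search_term(term))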

Mod 2: URLs I don’t care about

While my false positives were now lower, I was still getting results from sites that were valid hits but that I didn’t care about. I realized that for a lot of these sites, I would likely never care about any hits on them. I made a text file list of “noise URLs” containing entries like https://www.stupidsite.com . Anytime searx found a new hit, I had it check to see if the URL contained anything from my noise URL list. If it didn’t, it proceeded as normal.

If, however, the searx hit matched an entry in my noise URL file, the program would print “[+] Fake News!” to the screen and silently write the URL to a fake news text file instead of notifying me via email. This let me reduce the noise while still having a place to go, early on, to see if I was ignoring anything that I shouldn’t be.
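
Here's a rough sketch of that check, assuming a noise_urls.txt file with one URL (or URL fragment) per line. The file and function names are mine, not the ones in the actual script:

def load_noise_urls(path='noise_urls.txt'):
    with open(path) as noise_file:
        return [line.strip().lower() for line in noise_file if line.strip()]

def should_alert(url, noise_urls):
    """Return True for hits worth emailing, log everything else as fake news."""
    if any(noise in url.lower() for noise in noise_urls):
        print('[+] Fake News!')
        with open('fake_news.txt', 'a') as fake_news_file:
            fake_news_file.write(url + '\n')
        return False
    return True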

Mod 3: A picture is worth a thousand key words

Now that I was more satisfied with my signal-to-noise ratio, I decided to make the triage of notification emails more efficient by not just sending myself the links to pages that contained my terms, but actually sending a picture of each page as well. This was easy to do, but did come at a cost.

I used PhantomJS to accomplish this task. Whenever the program found a hit in searx or on Pastebin, the code would open up a PhantomJS browser, visit the URL, take a screenshot, and save it to a directory so that it could later be attached to my notification emails.
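
Driving PhantomJS from Python is straightforward with Selenium's (now retired) PhantomJS driver, which is roughly how I did it. Treat this as a sketch of the approach rather than the exact code:

import os
import time
from selenium import webdriver  # older Selenium releases include the PhantomJS driver

def grab_screenshot(url, out_dir='./images'):
    """Visit a URL headlessly and save a PNG screenshot for the alert email."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    driver = webdriver.PhantomJS()  # the phantomjs binary needs to be on your PATH
    try:
        driver.set_window_size(1280, 1024)
        driver.get(url)
        time.sleep(2)  # give slower pages a moment to finish rendering
        out_path = os.path.join(out_dir, '{}.png'.format(int(time.time())))
        driver.save_screenshot(out_path)
        return out_path
    finally:
        driver.quit()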

This provided a huge increase in my triaging speed since I didn’t necessarily have to visit the site, just look at a picture. It was also nice a few times when the sites causing my alerts were 3rd party sites which had been hacked and contained malware.

One negative with this was the increase in system requirements, since PhantomJS needs quite a bit more RAM than a normal Python script does. If you have this running on a physical system that you control, this is likely a non-issue since the specs needed are still modest. If you’re using a provider like DigitalOcean, however, I found that I needed to go from the $5 a month box to the $20 a month box before I achieved the “running for weeks unattended” stability that I desired.

Mod 4: Email Tweaks

The first tweak I made to the email section was an unbelievably minor one to allow alerts to be sent to multiple email addresses instead of just one. I then had to modify the format of the email slightly to go from a plain text message to a message with attachments.
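
A stripped-down sketch of that send function is below. The SMTP server, credentials and sender address are placeholders, and this is a reconstruction of the behavior described here rather than a copy of the real code:

import os
import smtplib
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def send_alert(subject, body, recipients, image_dir='./images', max_attachments=5):
    msg = MIMEMultipart()
    msg['Subject'] = subject
    msg['From'] = 'monitor@example.com'           # placeholder sender
    msg['To'] = ', '.join(recipients)
    msg.attach(MIMEText(body, 'plain'))

    images = sorted(os.listdir(image_dir)) if os.path.isdir(image_dir) else []
    for name in images[:max_attachments]:         # cap attachments to keep the email manageable
        with open(os.path.join(image_dir, name), 'rb') as image_file:
            msg.attach(MIMEImage(image_file.read(), name=name))

    server = smtplib.SMTP('smtp.example.com', 587)  # placeholder SMTP server
    server.starttls()
    server.login('monitor@example.com', 'password-goes-here')
    server.sendmail(msg['From'], recipients, msg.as_string())
    server.quit()

    # Clean the folder so the next alert starts fresh.
    for name in images:
        os.remove(os.path.join(image_dir, name))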

As you can see in the code above, I have the send email function attach anything in the ./images subfolder (up to five items) and then delete everything in the folder so it’s clean for the next alert. The reason I limited it to five attachments was that it’s possible to get an email with a dozen or more alerts and if the pages are large, the screenshots will be large as well.

Trying to process a large number of sizeable attachments can cause the program to hang and affect my precious stability. Capping the number of attachments at five seemed like a good compromise, since it lets me get screenshots 99% of the time while only occasionally having to actually go click on a link like a barbarian 🙂

The next time I make mods to this, I’ll likely move all of the images to a cold storage directory which I’ll delete every week with a cron job. That way, in the 1% of cases where the email lacks a screenshot, I’ll still have one in the cold storage folder.
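
That change should only take a couple of lines, something like the sketch below, with a weekly cron entry handling the cleanup:

import os
import shutil

def archive_images(image_dir='./images', cold_dir='./cold_storage'):
    """Move screenshots to cold storage instead of deleting them outright."""
    if not os.path.isdir(cold_dir):
        os.makedirs(cold_dir)
    for name in os.listdir(image_dir):
        shutil.move(os.path.join(image_dir, name), os.path.join(cold_dir, name))

# Example weekly cleanup entry (Sunday at 3am):
# 0 3 * * 0 rm -rf /path/to/monitor/cold_storage/*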

Once again, a HUGE hat tip to Justin Seitz. This would absolutely not exist in this form without him. I didn’t even know that searx was a thing until he introduced me to it.

A Script to Help Automate Windows Enumeration for Privilege Escalation

Often when I want to learn a skill, I’ll think up a project for myself that forces me to improve that skill. Recently I wanted to improve my Windows post exploitation and privilege escalation so I decided to work on a script to enumerate Windows systems to look for low hanging fruit that can be used to escalate privileges.

The definitive guide to Windows priv esc is http://www.fuzzysecurity.com/tutorials/16.html and a good deal of my commands come from that post or resources mentioned in the post. If you’re working on your Windows privilege escalation, you really should spend some time on that page.

I decided to use a batch file instead of PowerShell since batch should run anywhere and is easy for others to understand and modify. The output of the script is saved to three different text files. The script will be a work in progress, but I wanted to post a copy to try to help others automate the process.

First the script gathers basic enumeration information such as:

  • Hostname
  • Whoami
  • Username
  • net user info
  • systeminfo
  • mounted drives
  • path
  • tasklist /SVC

The script checks to see if .msi files are set to always install with elevated privileges, as well as for the presence of backup copies of the SAM for those juicy, juicy password hashes.
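
The script itself is a batch file, but the overall flow is simple: run each command and append its output to a text file, then query the two AlwaysInstallElevated registry keys (both need to be set to 1 for the .msi trick to work). Here's a rough Python illustration of that flow, not the actual script:

import subprocess

COMMANDS = [
    'hostname', 'whoami', 'net user', 'systeminfo',
    'wmic logicaldisk get caption', 'path', 'tasklist /SVC',
    # .msi elevated install check: both values must be 1 to be exploitable
    r'reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated',
    r'reg query HKCU\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated',
    # the usual spots for backup copies of the SAM
    r'dir %SYSTEMROOT%\repair\SAM %SYSTEMROOT%\System32\config\RegBack\SAM',
]

with open('output.txt', 'a') as output_file:
    for command in COMMANDS:
        output_file.write('\n===== {} =====\n'.format(command))
        output_file.flush()  # keep our headers in order with the command output
        subprocess.call(command, shell=True, stdout=output_file, stderr=subprocess.STDOUT)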

If accesschk.exe from Sysinternals is present, the script uses it to check for services that can be modified by unprivileged users.

After a quick check for sysprep files which may contain creds, network information is gathered including

  • Ipconfig /all
  • Net use
  • Net share
  • Arp -a
  • Route print
  • Netstat -nao
  • Netsh firewall show state
  • Netsh firewall show config
  • Netsh wlan export profile key=clear (shows wifi networks and passwords that the system has connected to previously)

No privilege escalation script would be complete without looking at scheduled tasks, so we run

  • Schtasks /query /fo LIST /v
  • Net start
  • driverquery

The script checks for any mention of “password” in the registry and then changes directories to c:\ . The reason for this change is that it is about to search the entire file system for files which may have credentials in them.

The results of the scans so far are saved to output.txt and a c:\temp directory is created for output of the next two text files of information.

The script checks for any file that contains “pass”, “cred”, “vnc” or “.config” in the file name. It then checks for a large number of .xml configuration files which may have creds including unattended install files.

The final file that the script creates is a tree list of all the files on the c:\ drive, and the script ends by outputting to the screen any services whose paths aren’t properly quoted and may be exploitable.
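
The unquoted service path check is the classic wmic trick: flag auto-start services whose binary path contains a space, isn't wrapped in quotes, and doesn't live under C:\Windows. Here's the same idea sketched in Python rather than batch:

import subprocess

def unquoted_service_paths():
    """Return auto-start services whose unquoted paths contain spaces."""
    output = subprocess.check_output(
        'wmic service get name,pathname,startmode /format:csv', shell=True)
    findings = []
    for line in output.decode('utf-8', 'ignore').splitlines():
        parts = line.strip().split(',')
        if len(parts) < 4 or parts[1] == 'Name':   # skip blank lines and the header row
            continue
        name, pathname, startmode = parts[1], parts[2], parts[3]
        if (startmode.lower() == 'auto' and ' ' in pathname
                and not pathname.startswith('"')
                and 'c:\\windows\\' not in pathname.lower()):
            findings.append('{}: {}'.format(name, pathname))
    return findings

if __name__ == '__main__':
    for finding in unquoted_service_paths():
        print(finding)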

I recently had a chance to run this script and it GREATLY sped up the process of looking for low hanging fruit on a Windows system and helped me spot a password in the registry.

As I make modifications to the script I’ll post the updates here but you can download a copy of the script at: https://github.com/azmatt/windowsEnum

Python Script to Map Cell Tower Locations from an Android Device Report in Cellebrite

Recently Ed Michael showed me that Cellebrite now parses cell tower locations from several models of Android phones. He said that this information has been useful a few times, but manually finding and mapping the cell tower locations has been a pain in the butt. I figured that it should be easy enough to automate, and Anaximander was born.

Anaximander consists of two Python 2.7 scripts. One you only need to run once, to dump the cell tower location information into a SQLite database; the second script you run each time you want to generate a Google Earth KML file with all of the cell tower locations on it. As an added bonus, the KML file also respects the timestamps in the report, so modern versions of Google Earth will have a time slider bar across the top to let you create animated movies or only view results between a specific start and end time.

Step one is to acquire the cell tower location data. For this we go to http://opencellid.org/ and sign up for a free API key. Once we get the API key (instantly) we can download the latest repository of cell phone towers.

[Screenshot: OpenCellID cell tower data download]

Currently the tower data is around 2.2 GB and contained in a CSV file. Once that file downloads you can unzip it to a directory and run the dbFill.py script from Anaximander. The short and simple script creates a SQLite database named “cellTowers.sqlite” and inserts all of the records into that database. The process should take 3-4 minutes and the resulting database will be around 2.6 GB.
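
The real dbFill.py is in the GitHub repo linked at the end of this post, but its core is just a CSV-to-SQLite load along these lines. The column positions assume the OpenCellID layout of radio, mcc, net, area, cell, unit, lon, lat, and so on, and the cell_towers.csv name is whatever your unzipped download is called; check the header row before trusting the indexes:

import csv
import sqlite3

conn = sqlite3.connect('cellTowers.sqlite')
cur = conn.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS towers
               (radio TEXT, mcc INTEGER, net INTEGER, area INTEGER,
                cell INTEGER, lon REAL, lat REAL)''')
# An index on the lookup columns keeps the later per-tower queries fast.
cur.execute('CREATE INDEX IF NOT EXISTS idx_towers ON towers (mcc, net, area, cell)')

with open('cell_towers.csv') as csv_file:
    reader = csv.reader(csv_file)
    next(reader)  # skip the header row
    rows = ((r[0], r[1], r[2], r[3], r[4], r[6], r[7]) for r in reader)
    cur.executemany('INSERT INTO towers VALUES (?,?,?,?,?,?,?)', rows)

conn.commit()
conn.close()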

Once the database is populated, the next time you dump an Android device with Cellebrite and it extracts the cell towers from the phone, you’ll be ready to generate a map.

From the “Cell Towers” section of your Cellebrite results, export the results in “XML”. Place that XML file and the Anaximander.py file in the same directory as your cellTowers.sqlite database and then run Anaximander.py -t <YourCellebriteExport.xml> . The script will start parsing through the XML file to extract cell towers and query the SQLite database for the location of each tower. Due to the size of the database the queries can take a second or two each, so the script can take a while to run if the report contains a large number of towers.
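
The lookup and KML generation look roughly like the sketch below. I'm deliberately not showing the Cellebrite XML parsing here, since the tag names vary by report version; assume the records were already pulled out of the export as (MCC, MNC, LAC, cell ID, timestamp) tuples:

import sqlite3

def tower_placemark(cur, mcc, mnc, lac, cid, when):
    """Look up one tower and return a KML Placemark string, or None if unknown."""
    cur.execute('SELECT lon, lat FROM towers WHERE mcc=? AND net=? AND area=? AND cell=?',
                (mcc, mnc, lac, cid))
    row = cur.fetchone()
    if row is None:
        return None
    lon, lat = row
    return ('<Placemark><name>{0}-{1}-{2}-{3}</name>'
            '<TimeStamp><when>{4}</when></TimeStamp>'
            '<Point><coordinates>{5},{6},0</coordinates></Point></Placemark>'
            .format(mcc, mnc, lac, cid, when, lon, lat))

# Example records parsed from the Cellebrite export: (mcc, mnc, lac, cid, ISO 8601 timestamp)
records = [(310, 410, 1234, 56789, '2016-01-01T12:00:00Z')]

conn = sqlite3.connect('cellTowers.sqlite')
cur = conn.cursor()
placemarks = [p for p in (tower_placemark(cur, *record) for record in records) if p]
conn.close()

with open('towers.kml', 'w') as kml_file:
    kml_file.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                   '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
                   + '\n'.join(placemarks) + '\n</Document></kml>')

The TimeStamp element on each Placemark is what drives the Google Earth time slider mentioned above.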

[Screenshot: Anaximander.py parsing the Cellebrite export and querying the database]

Ed was kind enough to provide two reports from different Android devices and both parsed with no issues. Once the script is finished it will let you know how many records it parsed and that it generated a KML file.

[Screenshot: Anaximander.py completion message showing the number of records parsed and the KML file generated]

This is what the end results look like.

[Screenshot: the cell tower locations plotted in Google Earth]

The script can be downloaded from: https://github.com/azmatt/Anaximander

This is the first version and there are several improvements to make but I wanted to get a working script out to the community to alleviate the need for examiners to map the towers one at a time. Special thanks again to Ed Michael for the idea for this (and one other) script as well as for providing test data to validate the script.

How to Guide for Getting Kali Linux Set Up on an AWS Instance

I’ve been using a “jump box” on Digital Ocean for a few years now and recently decided that I wanted to set up a Kali Linux instance on AWS. I ran into a few hiccups getting it up and running, so I documented what worked for me here in the hopes of saving others time and headaches.

One of the first articles I came across was by Primal Security (whose podcast I absolutely LOVE) at http://www.primalsecurity.net/pentesting-in-the-cloud/ . There was some great stuff in this article. Unfortunately, it relied on an AWS Marketplace Kali Linux image which is no longer available to new customers.

The next article I found was at http://sneakerhax.com/kali-linux-in-the-ec2-cloud/ . It was very close to what I needed, with a few exceptions, the biggest being that the Debian Jessie instance at the link they provided had a default main partition size of only 8 GB, which is not enough for a full Kali Linux install. I learned that lesson the hard way when my install failed at the very end.

With a hat tip to the above resources, here are the steps needed to successfully install Kali Linux on AWS.

Go into your AWS console and select “Launch Instance” in the upper left-hand corner.

[Screenshot: the AWS console with the “Launch Instance” button]

Search for and select the Debian Jessie image from the AWS Marketplace.

[Screenshot: the Debian Jessie image in the AWS Marketplace]

Here you can select how many vCPUs and how much RAM you would like. (Admin note: I chose the medium with 2 CPUs and 4 GB of RAM.) Make sure you hit “Next: Configure Instance Details” so you can add more storage space.

The defaults on most pages should work fine for you, so click Next until you get to the “Step 4: Add Storage” page. On this page, make sure you change the default size from 8 to at least 20 GB. (Admin note: I went with 30 GB.) After an install of Kali Linux Full, the drive will have around 10 GB on it, so anything over 20 should be good for you.

[Screenshot: the “Step 4: Add Storage” page with the volume size increased]

Once you make that change, you are ready to launch your instance and SSH in. If you’ve never used AWS before, it may take you a few minutes to figure out how to access your box; after the first time, it’s quite easy. Make sure your security groups allow SSH from your current IP address. The private key you generate should allow you to SSH onto the box as the user “admin”, using just that key file for authentication.

Click the Connect button in your AWS control panel instance window, and you will see some tips on how to access your box, including how to modify the key file for PuTTY if you’re a Windows user.

Once you’re logged into the box, run sudo su in order to switch to the root user. Use the passwd command to create a password for root.

Next, add the Kali Linux source repositories. Typing vi /etc/apt/sources.list will let you access the sources.list file where you can then append the following lines onto the end.

deb http://http.kali.org/kali kali-rolling main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali kali-rolling main contrib non-free

deb http://http.kali.org/kali sana main non-free contrib
deb http://security.kali.org/kali-security sana/updates main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali sana main non-free contrib
# deb-src http://security.kali.org/kali-security sana/updates main contrib non-free

deb http://old.kali.org/kali moto main non-free contrib
# For source package access, uncomment the following line
# deb-src http://old.kali.org/kali moto main non-free contrib

Admin note: The source for these repositories is from the official kali site at http://docs.kali.org/general-use/kali-linux-sources-list-repositories

Once the repositories are in place, run apt-get update && apt-cache search kali-linux to get update information and show all of your Kali Linux install options.

Once that command is complete, you will see a list of about ten different flavors of Kali Linux available, including minimal, top ten, and full. Of course, you want the full version (which is what you have in a normal Kali Linux VM), so run apt-get install kali-linux-full . This will likely take a while to run, but once it completes (hopefully without errors) you’ll have a working Kali Linux distro in the AWS cloud.

Admin note: There is a very real chance that you could encounter errors in these steps. If this happens, it’s no big deal. Ensure you added the correct lines to the sources.list file and then just rerun the last two apt-get commands. It may take an iteration or two, but it will eventually work and install successfully.

What better way is there to test a newly installed Kali instance than to type msfconsole ?

[Screenshot: msfconsole running on the new Kali instance]

I tested my new instance’s connectivity by grabbing the public-facing IP from the AWS control panel, opening up port 80 and hitting it from a web browser. That worked, but it was boring. What was quite a bit more fun was firing up a Metasploit listener, putting the IP address into a LAN Turtle from Hak5, sticking that into a computer hooked up to a network, and within a few seconds receiving a shell.

[Screenshot: the shell received from the LAN Turtle in Metasploit]

You now have a fully updated machine running Kali Linux sitting on the internet ready to go anytime you want it, for a total cost of a few dollars a month, as long as you remember to shut it down after you use it!

I can’t wait to collect more shells!

GREM Achievement Unlocked

I had been going through the SANS FOR610 Reverse Engineering Malware content OnDemand recently and last week I knocked out the GREM. I figured it would be a good time to post a few thoughts on it and talk about a few things that people can do to help prep for the course.

This was the first time in a while that I prepped for a GIAC exam without attending the course live. I was a bit worried about that with such a technical course, but I ended up having a great experience. This was also the first time one of my courses used the new style of OnDemand, where the course is recorded in a professional studio instead of during a live class. The results were really nice and felt far more intimate than I expected.

Every time a lab came up I would pause the course, work through the lab, and then watch Lenny Zeltser’s walkthrough afterwards. He did a fantastic job of explaining things and going through the labs step by step. Even when dealing with advanced concepts, I never felt lost.

In a stroke of great timing, when I was about 75% of the way through the course content I got a spear phishing email at work with an attachment. I checked it against virustotal.com and only 4 of 56 engines flagged it as malicious, with no further information. I thought it would be a good chance to put my newly found skills to the test and examine the attachment. I fired up my two VMs, and in a short amount of time I had a clear picture of what the malware was doing, had network-based and host-based IOCs, and was walking through the code in a debugger examining how it was unpacking itself. It was great practice and a nice confirmation that what I had learned worked in the real world.

In prepping for the exam I spoke to several friends who hold the GREM certification. One of the biggest things someone can do to prepare for the course is to get comfortable with assembly language and with watching and understanding what the stack is doing within a debugger. The course teaches these things, but if you’ve already been exposed to them you’ll feel a lot more comfortable and you’ll be able to focus on learning the other material.

Different people have different learning styles but this is one area where I think it’s really beneficial to watch someone walking through examples while they explain what’s going on. A fantastic free resource for getting exposed to Assembly Language is Vivek Ramachandran’s “Assembly Language Megaprimer for Linux” 11 part video series at SecurityTube.net (http://www.securitytube.net/groups?operation=view&groupId=5) or on YouTube (https://www.youtube.com/watch?v=K0g-twyhmQ4&list=PL6brsSrstzga43kcZRn6nbSi_GeXoZQhR). Vivek also has a cheap but not free “x86 Assembly Language and Shellcoding on Linux” series at PentesterAcademy.com which really helped me prepare for working on both Reverse Engineering and Exploit Development.

Overall I thought the FOR610 was a fantastic course and I got exactly what I wanted to get out of it.

How Long Do Truecrypt AES Keys Remain In Memory?

It’s been a bit since my last post, and in that time I’ve been to two SANS conferences, Black Hat, and DEF CON. It’s been a great but busy few months.

A few weeks ago I was presenting at a local forensics meeting and was asked by an attendee if AES keys from Truecrypt remained in memory when the Truecrypt volume was dismounted. I replied that I was fairly certain they were flushed from memory when the volume was dismounted but that I hadn’t tested it. It’s a fairly simple thing to test so I made a mental note to test it when I had a chance.

I fired up a laptop running Truecrypt 7.2 on Windows 7. I used the new Magnet Forensics memory acquisition tool to acquire the memory on the laptop. I then mounted a Truecrypt volume and took a second memory image. Finally, I dismounted the Truecrypt volume and immediately acquired the memory a third time.

Obviously the first memory image didn’t have any Truecrypt AES keys since I hadn’t mounted the volume yet.
[Screenshot: Volatility output for the first memory image showing no Truecrypt keys]

In the second memory image I used the Volatility “truecryptmaster” command to locate and display the Truecrypt AES key.

[Screenshot: Volatility truecryptmaster output showing the AES master key in the second memory image]

Finally for the big test I examined the third memory image which I acquired right after I dismounted the Truecrypt volume.

[Screenshot: Volatility output for the third memory image, taken after the volume was dismounted]

It appears as though the Truecrypt AES keys are indeed flushed from memory as soon as the volume is dismounted. I wanted to verify my findings using a different tool so I fired up Bulk Extractor and ran it on all three memory images. As you can see in the screenshot below the Truecrypt AES master key shown in the second Volatility examination is seen in the second memory image but not in the first or the third.

[Screenshot: Bulk Extractor results showing the master key present only in the second memory image]

This was a quick and simple experiment to verify what we thought was happening was actually happening.

A Quick Guide to Using Clutch 2.0 to Decrypt iOS Apps

A few days ago someone told me that they weren’t able to install Crackulous on their jailbroken iOS device and asked if I could recommend an alternative they could use to decrypt iOS apps. Since Crackulous was a GUI frontend for Clutch, I recommended that they check out Clutch, but when I went to find a good tutorial I could only find ones covering older versions of Clutch instead of the newer Clutch 2.0 RC2. It’s not quite as friendly as the older versions of Clutch, but it works like a champ on my jailbroken iPhone 5S running iOS 8.1.2.

The easiest way to get Clutch 2.0 RC2 on your jailbroken device is to add the iPhoneCake repository to Cydia using the URL http://cydia.iphonecake.com . Once that repository is added, you should be able to install Clutch 2.0 as shown in the image below.

[Screenshot: installing Clutch 2.0 from the iPhoneCake repository in Cydia]

Another option is downloading the source code from the Clutch GitHub repository at https://github.com/KJCracks/Clutch and compiling it using Xcode with iOSOpenDev installed. Once Clutch2 was on my iOS device, I used PuTTY to connect to it via SSH from my Windows system. I also could have used a terminal app on the iPhone itself and elevated my privileges to root.

Once I was connected I typed “Clutch2” which showed the following options:

[Screenshot: Clutch2 usage options]

Typing “Clutch2 -i” displayed all of the App Store apps installed on the device:

[Screenshot: the list of installed App Store apps from Clutch2 -i]

I decided to dump the third application (which I don’t want to display since I didn’t write the app) so I ran “Clutch2 -b <BundleID#>”. If I had wanted to dump the second app (WordPress) I would have typed “Clutch2 -b org.wordpress”. Clutch2 quickly generated the following output:

[Screenshot: Clutch2 output while dumping and decrypting the application]

The decrypted binary was placed under the /var/tmp/clutch directory. I used iFunbox to copy both the decrypted binary and the original binary (located in /var/mobile/Containers/Bundle/Application/xxxx) to my computer so I could compare the before and after results. Normally Mach-O executable files contain code for multiple ARM architectures, and you need to use the OS X command line tool “lipo” to extract the ARM slice that you would like to analyze, but in this case the application only contained code for armv7 so that wasn’t necessary.

Below you can see where I ran the file command on an iOS app with multiple architectures (armv7s and armv7) and on this application, which only has one architecture.

[Screenshot: file command output for a multi-architecture binary versus this single-architecture binary]

Once I confirmed that I wasn’t dealing with multiple architectures, I used the strings command to extract the text from both the original binary and the binary which Clutch2 produced. The original encrypted version is on the left and the post-Clutch2 decrypted version is on the right.

[Screenshot: strings output from the encrypted binary (left) and the decrypted binary (right)]

As you can see, the decrypted version gives us quite a bit more information about what’s going on inside of the application, and I can start to use the tricks I learned in the SANS SEC575 course to analyze the app and its behavior.

New Video Preview Utility to Help With Forensics Analysis

[Screenshot: the Video Preview Utility HTML report]

Earlier this week I walked into a friend’s office and he had just finished examining an iPhone using Cellebrite. The good news was that the acquisition went flawlessly. The bad news was that the device contained over 200 videos that he now needed to preview to see if any were relevant to his interests. As phones grow larger and larger (I have my 128GB iPhone sitting next to me as I type this) this is only going to become a bigger issue, so I wanted to write a Python script to try to help in situations like these.

The Video Preview Utility (for lack of a better name) utilizes the “Video Thumbnails Maker” tool to generate a sheet of evenly spaced thumbnail preview images for all video files in a directory. It copies all of those images to a subdirectory, creates an HTML file with all of the preview images, and generates a log of all videos successfully processed and any videos where an error occurred. It’s a heck of a lot quicker to scroll down an HTML page looking for anything of interest than it is to click around in hundreds of videos.

Setup:
Note: These instructions cover Windows and assume Python 2.7.x is installed. No third party libraries were used.

Step 1 is to download and install Video Thumbnails Maker from http://www.suu-design.com/downloads.html. As I write this it’s the third program from the top on that page.

Step 2 is to add the directory where it installs into your system’s path. The directory where it installed on my laptop was “C:\Users\Matt\AppData\Local\Video Thumbnails Maker”. Once you add it to your path and restart your machine it should be set up, but you can check by opening up a command prompt and typing “VideoThumbnailsMaker.exe”. If the program starts, then it’s in your path and our script will be able to find it no matter which directory you run it from.

Step 3 is to start Video Thumbnails Maker (this is the only time we will use it graphically), click on “environment” and deselect the “VTX (Video Thumbnails File)” box. Leaving it selected wouldn’t hurt anything, but it would place VTX files, which aren’t needed, into your videos directory.

[Screenshots: the Video Thumbnails Maker environment settings with the VTX option deselected]

Once these steps are done you should be ready to run the script. Download videoPreviewUtility.py, unzip it, place it into a directory with your video files and run it. It will create a subdirectory named with the current date and time. It will then attempt to generate a video preview image for every .mov, .mp4, .avi and .wmv file in the directory. Adding additional file types or extension names should be as easy as adding another OR option on what is currently line number 24. As mentioned earlier, in addition to the preview images themselves, it also generates an HTML report with all of the video previews and a log file listing all successfully previewed videos and any which couldn’t be previewed.
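
For reference, the core loop amounts to filtering on those extensions and shelling out to Video Thumbnails Maker for each match. This sketch assumes the tool accepts the video path as a command-line argument; check the actual script for the exact invocation it uses:

import os
import subprocess

VIDEO_EXTENSIONS = ('.mov', '.mp4', '.avi', '.wmv')  # add new extensions here

def preview_videos(directory='.'):
    processed, errors = [], []
    for name in sorted(os.listdir(directory)):
        if not name.lower().endswith(VIDEO_EXTENSIONS):
            continue
        # Assumption: VideoThumbnailsMaker.exe takes the video path as its argument.
        returncode = subprocess.call(['VideoThumbnailsMaker.exe',
                                      os.path.join(directory, name)])
        (processed if returncode == 0 else errors).append(name)
    return processed, errors

if __name__ == '__main__':
    ok, failed = preview_videos()
    with open('processing_log.txt', 'w') as log_file:
        log_file.write('Processed:\n{}\nErrors:\n{}\n'.format('\n'.join(ok), '\n'.join(failed)))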

I’ll give this script some more testing with different types of dumps next week and will likely tweak it some but I wanted to get it up for anyone to play with. Friends don’t let friends watch 200+ videos!!!

Long Overdue 2015 Update

[Image: CactusCon 2015 badge]

It’s been an extremely busy start to the year, but I wanted to make a quick post to talk about what I’ve been up to so far.

Last month I got to attend my first SANS DFIR-specific event when I took FOR508 with Rob Lee in Monterey. I’ve taken the 508 previously, but this was a much needed refresher. As I’ve discussed in a few different articles, FOR408 focuses on analyzing activity on a Windows computer, and the 508 builds upon that base to cover quickly triaging large numbers of systems remotely, a “greatest hits” of memory analysis, timeline automation and analysis, volume shadow copy analysis, and deep dive artifact analysis on Windows systems like I’ve never seen covered anywhere else. You may not remember everything from the deep dive section verbatim, but the combination of being exposed to those artifacts and having the course books as a reference means you’ll be able to analyze them quickly when the time comes.

In addition to being my first DFIR specific conference, this was my first class with Rob Lee. He was funny, friendly and took the time to chat with students in class and online. Throughout the entire class Rob shared real world stories of exactly how what he was teaching us has been used out in the real world.

For the day 6 challenge, Rob and the 572 instructor Phil Hagen tried something they had never tried before: they combined the classes! The data for the day six challenge for both classes comes from the same event (508 students have the disk and memory artifacts and 572 students have the network artifacts), so their idea was that teams could work together, with 508 students giving 572 students indicators to look for and 572 students helping answer what activity was going on. The plan worked flawlessly and everyone involved seemed to have a really good time. I was fortunate to have some brilliant individuals on my team and we won the challenge and the Lethal Forensicator coins 🙂

Monterey was a great time but as soon as I got back home it was back to the books. Back in December I answered the CactusCon call for papers with a proposal for my first ever public con talk. CactusCon called my bluff so this past Friday I gave a talk on “Getting Started with Memory Forensics”. There were approximately 40 people in the room for my talk and I received some great feedback afterwards. This was my first CactusCon and they did a fantastic job from start to finish. They had multiple tracks of talks, a Dave Kennedy keynote speech, a lockpick village and an area outside for attendees to solder the parts kits onto their badges. I had a great time and I’ve got nine months to come up with a good idea for a talk for the 2016 version.

That’s what’s been keeping me occupied so far this year. I’d say that now I can breathe a little but I doubt very seriously that it’s going to slow down.