
Burp and Samurai-Web Testing Framework

The other day I came across a social media post that was about utilizing Burp Suite to identify vulnerabilities in web applications. I had heard of Burp before but never really had the chance to play around with it – until now.

Just like a lot of other security tools, Burp has a community version along with its commercial product. I decided to download the free edition from here into my home lab. The installation process is straightforward and in no time you have Burp up and running. Here is what the initial interface looks like:


Right when I finished my installation of Burp, I realized that I did not have a web application running in my lab that I could use to test Burp against. Bummer! Now I had to decide between setting up a web server myself or finding a distribution that came pre-built with one. This was a no-brainer – and within minutes I found a few distributions designed for testing and learning web application security, such as SamuraiWTF, WebGoat and the Kali web application metapackages. I decided to go with SamuraiWTF.

SamuraiWTF gives you the option to run from a live disk or install it in a VM. I decided to install it in a VM. Here is a good guide on the installation process. I gave my VM instance 4GB of RAM and 3 cores; more than enough horsepower.

This distribution comes pre-installed with Mutillidae, which is a “free, open source, deliberately vulnerable web-application providing a target for web-security enthusiasts”. This was perfect for what I was looking for. Setting up Mutillidae is pretty simple – all I had to do was change my network configuration to NAT and that was it. However, if you need more information on configuration, here are some great video guides on Mutillidae; in fact, I used some of these myself while configuring Burp to work with Mutillidae.

After finishing all of the above prep work, I was ready to run Burp!

For those who are not familiar with Burp, it’s an interception proxy that sits between your browser and the web server; by doing so it is able to intercept requests/responses and gives you the ability to manipulate their content. To do this, you have to configure Burp as your proxy. On your VM, this would be your localhost (Proxy Tab > Options):


Likewise, you have to configure your browser to use that same proxy. Here is my proxy configuration in Firefox:

Firefox Proxy Configuration

Now as you navigate through your Mutillidae webpage, all your requests should go through Burp. One thing you have to do is turn on the Intercept option in Burp. It’s under Proxy > Intercept.
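If you would rather drive traffic through Burp from a script instead of the browser, the same proxy setting applies. Here is a minimal sketch in Python, assuming Burp is listening on its default 127.0.0.1:8080 (the Mutillidae URL is left as a placeholder for your VM’s address):

```python
import urllib.request

# Burp's default proxy listener (Proxy > Options) is 127.0.0.1:8080.
burp_proxy = "http://127.0.0.1:8080"

# Build an opener that routes HTTP(S) requests through Burp,
# just like pointing the browser's proxy settings at it.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": burp_proxy, "https": burp_proxy})
)

# With Intercept on, this request would sit in Burp waiting for Forward/Drop.
# (Placeholder URL -- substitute your Mutillidae VM's address.)
# response = opener.open("http://<vm-address>/mutillidae/")
```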

What this allows you to do is see the request as it’s made, but it gives you the control to either forward it to the web server or simply drop the request (like a typical MitM). For example, on the login page of Mutillidae I used the username admin and the password admin123. As soon as I hit “Login” I saw the request being made from my browser to the web server in Burp:

Burp Intercept

In the screenshot above, you can see the two options: Forward and Drop. If you hit Forward, the web server will receive this request from your browser and will respond as it normally would. In this case, the account I used to log in did not exist:

Web Server Response

Burp also has the capability to capture the responses. It is an option that you can turn on by going to Proxy > Options; towards the middle of the page you will see “Intercept Server Responses”. By turning this on you will be able to see and control both sides of the exchange:

Request and Response

If you look under Target > Site Map, in the left pane you will see a list of all the sites that you have visited with the Burp proxy on:

History Map

One advantage of this feature is that it allows you to go back and revisit requests and responses. The sites shown in grey are those that are available on the target web page but that you have not visited yet.

Another neat feature: if you do not want to visit each page individually, you can run the “Spider” feature, which will map the whole target site for you.
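Conceptually, a spider is just a crawler: fetch a page, pull out its links, queue the in-scope ones, repeat. Here is a toy sketch of that link-extraction step (not Burp’s actual implementation), run against an inline HTML snippet standing in for one fetched page:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags -- the step a spider repeats per page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A stand-in for one fetched page; a real spider would request each queued URL.
page = '<a href="/mutillidae/index.php">Home</a><a href="/mutillidae/login.php">Login</a>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/mutillidae/index.php', '/mutillidae/login.php']
```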


If you go under Spider > Control you are able to see the status of the Spider as it runs:

Spider Status

When you intercept a request or response, you have the ability to send it to other features of Burp. You can view these additional options by right-clicking on the intercept:

Intercept Additional Options

Towards the bottom of the official Burp Suite guide page here, you can see a brief description of most of the options shown in the screenshot above. The one I found really neat is the “Repeater” option, which allows you to modify and re-transmit requests repeatedly without needing to perform a new intercept each time.
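What you actually do in Repeater is edit the captured request and resend it. A sketch of that edit step in Python (the form-field names below are assumptions for illustration, not taken from Mutillidae):

```python
from urllib.parse import parse_qs, urlencode

# An intercepted login body like the one captured earlier (field names assumed).
body = "username=admin&password=admin123"

# Parse, tamper with one parameter, and re-encode -- the same edit you would
# make by hand in Repeater before resending the request.
params = parse_qs(body)
params["password"] = ["' OR '1'='1"]  # classic SQL-injection probe
tampered = urlencode(params, doseq=True)
print(tampered)
```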

This concludes my brief journey of getting started with Burp using SamuraiWTF. There is a whole lot more that I did not have the chance to explore, but here is a great reference for advanced topics.

Below is a quick blurb on some of Burp’s features:

Spider: crawls the target and saves the webpages it discovers.

Intruder: automated attack feature which tries to automagically determine which parameters can be targeted i.e. fuzzing.

Fuzzing options: Sniper (fuzz each position one-by-one), Battering Ram (all positions on the target receive the same payload), Pitchfork (each target position is fuzzed in parallel), Cluster Bomb (iterates through payload combinations for multiple positions at once).

Proxy: used to capture requests & responses to either just monitor or manipulate and replay.

Scope: controls what (pages, sites) is in/out of the test “scope”.

Repeater: manually resubmit requests/responses; allows modification.

Sequencer: used to detect predictability of session tokens using various built-in tests i.e. FIPS 140-2.

Decoder: allows encoding/decoding of the target data i.e. BASE64, Hex, Gzip, ASCII, etc.

Comparer: allows side-by-side analysis between two requests/responses.
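The Intruder attack modes above map onto familiar iteration patterns. A rough illustration, assuming two insertion positions (username and password) with hypothetical payload lists: Pitchfork steps the lists in lockstep (like zip), while Cluster Bomb tries every combination (like a Cartesian product):

```python
from itertools import product

# Hypothetical payload lists for two insertion positions.
usernames = ["admin", "root"]
passwords = ["admin123", "toor"]

# Pitchfork: payload list i feeds position i, stepped in parallel.
pitchfork = list(zip(usernames, passwords))

# Cluster Bomb: every payload combination across both positions.
cluster_bomb = list(product(usernames, passwords))

# (Sniper would fuzz one position at a time with a single list;
# Battering Ram would put the same payload into every position.)
print(pitchfork)          # [('admin', 'admin123'), ('root', 'toor')]
print(len(cluster_bomb))  # 4
```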




How does it feel to know that your personal computer can be remotely controlled by someone without your knowledge for ill purposes? Or worse, instead of a single individual having this unauthorized access to your system, it could be a group of people over the internet controlling what your computer does and how it does it. In the field of information security, if your system is under such control it is considered a bot: a computer system being controlled by an automated malicious program. In addition, your computer can be part of a larger group of infected computer systems, and these collections of infected computers make up botnets. Casually, these bots are also referred to as zombies, and the remote controller is called the botmaster. So how are these bots born, and how do they grow into botnets?

According to the annual threat report of Damballa, an independent security firm, “at its peak in 2010, the total number of unique botnet victims grew by 654 percent, with an average incremental growth of 8 percent per week”. Typically, these bots are developed by tech-savvy criminals who write the malicious bot code and then release it on the open internet. Once on the internet, the bot can perform numerous malicious functions based on its design, but in most cases it spreads itself across the internet by searching for vulnerable, unprotected computers to infect. After compromising a victim’s computer, these bots quickly hide their presence in difficult-to-find locations, such as the computer’s operating system files. The botmaster’s goal here is to keep the compromised system’s behavior as normal as possible so the victim does not become suspicious. Common activities that bots perform at this stage include registering themselves as a trusted program in any anti-virus software that might be on the victim’s computer. Moreover, to maintain persistence, bots add their operations to the system’s startup functions, which results in bots automatically reactivating even after a shutdown/restart. Throughout this process bots continue to report back to the botmaster and wait for further instructions.

Below are some of the common operations that bots can perform on behalf of their botmaster:

– Sending spam, viruses and spyware.
– Stealing personal and private information and communicating it back to the malicious user: credit card numbers, bank credentials and other sensitive personal information.
– Launching denial of service (DoS) attacks against a specified target; cybercriminals extort money from website owners in exchange for regaining control of the compromised sites.
– Committing click fraud: boosting web advertising billings by automatically clicking on internet ads.

As the list above shows, there are numerous functions that bots can perform. However, recently bots have mainly been used to conduct Distributed Denial of Service (DDoS) attacks: utilizing hundreds or thousands of bots from around the world against a single target. The botmaster’s goal with DDoS is to have thousands of bots across numerous botnets attempt to access the same resource simultaneously. This overwhelms the resource with thousands of requests per second, making it unreachable. This inaccessibility has severe effects on legitimate users and requests. According to the FBI, “botnet attacks have resulted in the overall loss of millions of dollars from financial institutions and other major U.S. businesses. They’ve also affected universities, hospitals, defense contractors, law enforcements, and all levels of government”.

A misconception exists that if your system does not hold any valuable information, or if you do not use your system to conduct online financial transactions, then an adversary is less likely to target it. Unfortunately, as much as we would like this to be true, it is not the case. For botnets, the most valuable elements are your system’s storage and your internet speed. Our personal computers are now capable of storing and processing terabytes of information seamlessly and can use our high-speed internet to transfer this information. As stated by a malware research team from Dell SecureWorks, a botnet “allows a single person or a group to leverage the power of lots of computers and lots of bandwidth that they wouldn’t be able to afford on their own”.



Layered Security For Home User – Part 1

Most who work in information security are familiar with the term layered security (also known as layered defense), which in a nutshell means that you employ multiple solutions/components to protect your assets. This idea has been pushed at the enterprise level for years and has been significantly effective at deterring attacks. And with the latest advancements in endpoint-monitoring (EPM) solutions, enterprises now have the capability to both monitor and control what happens on all of the workstations in the environment.

But if you move away from enterprise security to securing the average home user, most users tend to rely solely on anti-virus solutions. Now, I am not going to get into the debate over how effective or ineffective anti-virus solutions are – but if you are interested in reading rants on this topic, feel free to do so. However, what I will say is that just having anti-virus software (especially now) definitely does not meet the layered security concept.

So, how do we get layered security for home computers? Well, the market is not short of a variety of solutions that promise to complement your existing anti-virus while providing you the benefit of added security. And in my opinion some of these products can actually be beneficial – such as malware, spyware and email protection – but most of these features are already built into the latest anti-virus solutions; you may just not know it. So, the question still stands: how do we get layered security for home computers? Well, let me answer this by explaining a recent event where I had the opportunity to test a theory first hand….

