Pcap Paring

Intro

Filtering packets

Approaches

Full example

Additional notes

Resources

Compression tools

Intro

The first step in most network monitoring tasks is capturing network traffic from a network interface. In an ideal world where disk space and disk bandwidth are infinite, we’d save the files to disk as that allows us to run these packet files through multiple monitoring and forensics tools. Unfortunately, we run into the difficulty of saving massive files. To save all the traffic from a fully saturated 1 gigabit/second ethernet link (1 gigabit/second in both directions) would require a constant stream of 250MB to disk every second and use up just under 22TB/day.
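That throughput math can be sanity-checked with shell arithmetic (using decimal units, so 1 gigabit = 10^9 bits):

```shell
# 1 gigabit/second in each direction = 2 gigabits/second total;
# divide by 8 bits/byte to get bytes per second.
bytes_per_sec=$(( 2 * 1000000000 / 8 ))
# 86400 seconds in a day.
bytes_per_day=$(( bytes_per_sec * 86400 ))
echo "${bytes_per_sec} bytes/sec"   # 250000000, i.e. 250MB every second
echo "${bytes_per_day} bytes/day"   # 21600000000000, just under 22TB
```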

This commonly forces us to discard the files once they’ve been handed to all of the needed analyzers. There’s a middle ground, though, that allows you to keep some of the information without having to commit to massive disk purchases.

Instead of keeping the entire packet stream, let’s pare it down to just what we might need in the future. What might that include?

Filtering packets

What do we need to keep?

Let’s take a look at a few scenarios.

  1. We find an infected system on the network. The first question is how the attacker reached it. Could I go back to our packet logs and see what connections that machine has accepted just before the time of first infection?
  2. We get a subpoena for all information about a customer of ours. Can we use the information in the packet logs to identify any connections that customer made to our public servers?
  3. In response to a DoS attack, can we come up with statistics on what types of traffic are being thrown at us so we can know where to focus our mitigation efforts?
  4. A site on the Internet claims we’ve been attacking them. Could we use the packet logs to prove that we have not sent any traffic to them at all or learn that we did?

For the above scenarios, respectively, you’d need to keep:

  1. At least the SYN, SYN-ACK, FIN, and RST packets for all TCP connections.
  2. Like above, we’d need those same opening and closing packets to show that the client made connections at all; to show what happened on those connections, we’d need the Ack-only packets in the middle as well.
  3. Here we’d want as much of that traffic as we can handle. Since DDoS attacks tend to focus on a single type of traffic (though occasionally 2 or 3 types), we can capture just that port or those ports as the attack starts and analyze the packets.
  4. Similar to 1, we’d want the opening and closing packets of all TCP conversations.
High level paring approach

You’ve probably already thought of a few I hadn’t considered. 🙂 That said, there’s a general approach to follow:
– Initial capture: this could be full packets or some subset that you need for initial analysis.
– Immediately or soon after capture: compress pcap files to save space when practical.
– Early discard: once these pcap files have been processed by the tools you need, discard some portion of the data to save space but still accommodate the above scenarios.
– A few weeks/months in: Pare the files down again to the bare minimum needed to handle basic questions about network traffic.
– Even more months later: Delete files more than N months old.
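As a sketch, the schedule above might be driven by a daily cron job along these lines. The directory name, day counts, and paring filter below are placeholders to adapt to your own policy:

```shell
#!/bin/sh
# Placeholder pcap store; point this at your real capture directory.
PCAP_DIR="${PCAP_DIR:-./pcaps}"
mkdir -p "$PCAP_DIR"

# Soon after capture: compress pcaps that are at least an hour old.
find "$PCAP_DIR" -type f -name '*.pcap' -mmin +60 -exec nice -n 19 bzip2 -9 {} \;

# A few months in: pare down to TCP open/close packets plus non-TCP traffic.
# The ".pared." marker keeps already-pared files from being processed again.
find "$PCAP_DIR" -type f -name '*.pcap.bz2' ! -name '*.pared.*' -mtime +90 \
  | while read onecap ; do
    pared="${onecap%.pcap.bz2}.pared.pcap.bz2"
    nice -n 19 bzcat "$onecap" | nice -n 19 tcpdump -s 0 -r - -w - \
        '(tcp[13] & 0x17 != 0x10) or not tcp' | nice -n 19 bzip2 -9 >"$pared" \
      && rm "$onecap"
done

# Even more months later: delete anything past the retention limit.
find "$PCAP_DIR" -type f -name '*.pcap.bz2' -mtime +365 -delete
```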

What can we throw away after a while?

The right answer is this: If you look at a type of traffic and can say “There’s no chance I’ll need this more than N days after it was captured”, discard it after N days.

There are some general categories of things to discard:

  1. Chatty protocols like multicast DNS (UDP port 5353), netbios (UDP and TCP 137-139 and 445), NTP (123/UDP), and possibly DNS (UDP and TCP port 53). If you don’t feel these will be useful to you in the future, by all means discard them.
  2. In a TCP connection or UDP conversation, the bulk of the conversation is probably not of interest. In network analysis, you might want to hold on to the request (the early bytes going from client to server) and the headers of the response (the early bytes going from server to client), and discard the rest of the conversation. See “Discard ACKs after early ones” below if you want to keep these early bytes, and see “Discard ACKs” if you don’t care about any of the payload. If you’re interested in the textual parts of the TCP connection but don’t want to save the middle portion, ngrep allows you to extract the pieces that look like text before you discard the ACKs.

    In addition to the saved disk space, discarding the payload has the added benefit of privacy; in strict privacy settings, you may not be allowed to capture the conversation between client and server. By discarding that conversation immediately, you’re still left with the ability to identify that a conversation took place without conflicting with your organization’s policies.

Paring risk

One last warning before we get to actual paring. You’ll be making decisions on discarding traffic based on naive clues like port number, using an assumption that – for example – every piece of traffic going over UDP port 53 is a benign DNS lookup or DNS response. Keep in mind that with tunneling, VPNs, encapsulation, steganography, and even the simple use of a different port, it’s possible to send almost any kind of traffic over almost any port. This makes that early analysis before you’ve discarded any traffic even more important; “hidden” traffic can no longer be detected once you’ve thrown away the packets in which it was hidden.

How to pare down

To do most of this we’ll use the “tcpdump” sniffer that comes standard on Linux, Mac OS, and other Unixes (see the “Windows” note below). We’ll use it in a mode where we tell it to read packets from pcap files, select only some of the packets, and write only those to an output file, like so:

nice -n 19 tcpdump -s 0 -r original.pcap -w filtered.pcap 'filter'

The ‘filter’ at the end is a Berkeley Packet Filter expression, or BPF for short. The string between the quotes states what packets should be kept; packets that do not match this filter will be discarded. We’ll use these filters below.

For other examples of BPF expressions, see the “EXAMPLES” section of the tcpdump man page, or “man pcap-filter”. Most pcap tools support this syntax, but note that Wireshark and tshark use BPF only during capture and provide a different (though richer) display filter language for analysis.

Many of these commands have “nice -n 19” in front of them. This standard tool (included with Linux, Unix, and Mac OS, can be added to Windows) tells the computer to let other programs go first if there’s other work to do. This allows you to queue up lots of processing jobs without hurting any higher priority work (effectively everything except these nice’d processes.) On the other hand, if this system’s primary task is running jobs like these, remove “nice -n 19” in these commands.

Paring down compressed files

If the original file is compressed with bzip2 and you want to immediately recompress it after paring, use:

nice -n 19 bzcat original.pcap.bz2 | nice -n 19 tcpdump -s 0 -qtnp -r - -w - '{filter}' | 
nice -n 19 bzip2 -9 >filtered.pcap.bz2

The “bzcat” program decompresses the original compressed capture file and feeds it to tcpdump, which reads from “-” (standard input) and writes the filtered packets to standard output (“-w -”), where the bzip2 program is ready to recompress them and save the result to a new filename.

Processing large numbers of pcaps

In the case where you’re dealing with tens of thousands of pcaps or more, it may not be possible to write a single command to compress them like:

bzip2 -9 *.pcap

The list of pcaps may simply be too long to fit on a single command line. In that case, you can switch over to a loop:

ls -A1 | grep '\.pcap$' | while read onecap ; do bzip2 -9 "$onecap" ; done

This approach can be used for any command that’s expected to work with massive numbers of files.
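Another option for massive file lists is find piped to xargs, which also handles filenames containing spaces (the “-r” flag, a GNU extension, skips running bzip2 when no files match):

```shell
# Compress every .pcap in the current directory, however many there are.
# -print0 / -0 keep unusual filenames intact; raise -P to run several
# bzip2 processes in parallel.
find . -maxdepth 1 -type f -name '*.pcap' -print0 \
    | xargs -0 -r -P 1 nice -n 19 bzip2 -9
```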

Approaches

Compression

How: Use a tool such as zip, bzip2, or gzip to compress the file.

Pros: A compressed pcap file can take up <30% of the original file size. The file can be accessed without needing to uncompress it to disk first – see next paragraph. Ironically, accessing it may be faster, as the CPU time needed to decompress may be less than the time needed to retrieve the uncompressed file from disk.

Cons: Compressing the file can take a lot of CPU time. To modify or read the file, it must be decompressed first.

Note that any task that would normally have wanted to read from a pcap file can probably be convinced to read directly from a compressed pcap file. In addition to taking less disk space, this uses less disk read bandwidth in exchange for a trivial amount of extra processing to decompress. As an example, replace:

tcpdump -r data1.pcap {other options}

with

bzcat data1.pcap.bz2 | tcpdump -r - {other options}

The following are some examples of ways to compress files. These examples are not intended to replace some well written articles giving comparisons of the time needed and final compressed size when using different compression tools. See https://www.google.com/search?q=compression+tool+benchmarks

Compression – fastest

How:

gzip -1 data1.pcap

Pros: This uses little CPU time to perform the compression.

Cons: The resulting compressed file could be 50% larger than one compressed with bzip2 -9 .

Compression – smallest

How:

bzip2 -9 data1.pcap

Pros: Better compression, so less disk space used.

Cons: Much more CPU time.

One approach that might make sense would be to use gzip -1 immediately after you’ve run the pcap file through your standard pcap tools, and then recompress it later with bzip2 -9 when the system is quieter or on another system with available processor time.
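A sketch of that two-stage approach, with data1.pcap standing in for each capture file:

```shell
# Stage 1, immediately after analysis: quick, light compression.
nice -n 19 gzip -1 data1.pcap          # leaves data1.pcap.gz

# Stage 2, later, when spare CPU is available: recompress for maximum
# savings, and remove the gzip'd copy only if bzip2 succeeded.
gzip -dc data1.pcap.gz | nice -n 19 bzip2 -9 > data1.pcap.bz2 \
    && rm data1.pcap.gz
```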

Compression – using more than one processor

While most file compressors use a single processor core, “pbzip2” (a drop-in replacement for bzip2) can use as many processors as the system has. Its “-p” option lets you set a maximum number of processors to use in case you want to limit this. When using N processors it’s almost N times as fast as using a single processor. The downsides are that the resulting file could be up to 0.2% larger than the same file compressed with bzip2, and the system may use more RAM while compressing.
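Following the pattern of the earlier compression sections, the command is simply (here capped at 4 cores with -p4; omit that to use every core):

```shell
# Compress with at most 4 processor cores.
nice -n 19 pbzip2 -9 -p4 data1.pcap
```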

Paring

Now we’ll switch over to paring approaches that mostly use the tcpdump program to strip out unwanted parts of the pcap file. You’ll have to decide which – if any – of the following approaches are appropriate for your data retention requirements.

IPv4 only

This discards any IPv6 traffic. This might be useful on a network where systems support IPv6 but there’s no IPv6 connectivity to the outside world.

How:

tcpdump … 'ip'

IPv6 only

This discards any IPv4 traffic. This would allow you to identify whether IPv6 is being used by any production servers and services.

How:

tcpdump … 'ip6'

Discard ACKs

More than 85% of the traffic flowing across the Internet is TCP traffic. A TCP connection begins with 3 early SYN, SYN/ACK and ACK packets with no payload, and ends with payload-less FIN or RST packets. All the packets in between those are ACK packets with their payloads holding the content going from client to server or server to client.

It might seem pointless to discard all those middle ACK packets – that’s everything said from client to server and all the server’s responses. The value here is that with the initial SYN and SYN/ACK packets and the FIN or RST packets at the end, we get the client and server IP addresses and ports, the start and end time, and the number of payload bytes transferred.

In short, this gives massive savings in disk space while still allowing you to know that a connection happened. The downside is that you can’t tell what data was sent or received.

How:

tcpdump … '(tcp[13] & 0x17 != 0x10) or not tcp'
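To unpack the magic numbers: tcp[13] is the TCP flags byte, and the 0x17 mask selects FIN (0x01), SYN (0x02), RST (0x04), and ACK (0x10). A masked value of exactly 0x10 means “ACK and nothing else”, so the filter keeps every packet that is not a bare ACK. Filled out as a full paring command:

```shell
# Keep SYN, SYN/ACK, FIN, RST, and all non-TCP packets; drop pure ACKs.
nice -n 19 tcpdump -s 0 -r original.pcap -w filtered.pcap \
    '(tcp[13] & 0x17 != 0x10) or not tcp'
```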

There are two approaches that provide a middle ground between discarding all TCP payload and keeping it all. In the next section, we’ll look at keeping the earlier packets in the TCP conversation as that’s where much of the forensically interesting material is. That requires an extra tool, so if you’d prefer to stay with tcpdump, the “Shrink amount of payload stored” section keeps the beginnings of all packets. Think of the first approach as “Save the first 10 pages of this book and discard all later pages”, and the second as “Use a slicer to cut off the bottom 80% of every page and just save the first few lines on each page.”

Discard ACKs after early ones

As we said in the “Discard Acks” section above, by discarding all acks we cannot see what request was placed or how the server responded (for example, the HTTP headers for “success” or “file not found”). By using a different tool, we can keep the early packets that commonly show request and response, but discard the ACKs that follow to save the majority of the space.

“pcap-modify” lets you keep the first C ACK packets in both directions of every connection seen in a pcap file with its “--ackcount C” parameter. It also allows you to ask to keep the first B bytes of every conversation with the “--ackbytes B” parameter. If you use both, packets that are either in the first C ACK packets or the first B bytes are kept. All remaining ACK packets are discarded. All SYN, SYN/ACK, FIN, and RST packets are kept. All non-TCP traffic is kept as well.
For reference, there’s a similar ability to keep early packets in UDP conversations with the “--udpcount Y” and “--udpbytes Z” parameters. These only affect UDP traffic.

How:

pcap-modify.py --ackcount 10 --ackbytes 4096 -r data1.pcap -w data1-earlypayload.pcap

Pros: Reduces disk use while still preserving the earlier packets that are likely to contain the protocol request and reply.

Cons: CPU and memory used.

Shrink amount of payload stored

Current versions of tcpdump capture the first 256K of a packet – in almost all circumstances that’s the entire packet as most network cables can’t even carry a packet that big.

One way to reduce the storage needs is to keep only the first N bytes of the payload. Add about 100 bytes to N for the headers at the beginning of the packet and you have what tcpdump calls the “snaplen”. Bytes that come after the first N+100 bytes are discarded.

While this does limit the storage needs from attempting to capture a DDoS attack with huge packets, it can make analysis difficult. Because the ends of packets are chopped off, it can be difficult to reconstruct the TCP payload. You might consider using pcap-modify instead.

How:

If you’re capturing packets from a network cable (as opposed to reading them out of a file), you can use the “-s snaplen” option with tcpdump:

tcpdump … -s snaplen

If you’re reading packets from a file, you’ll need to switch tools and use “editcap” (part of the Wireshark package that may be included with your operating system).

editcap … -s snaplen orig.pcap truncated.pcap

Pros: While this won’t save as much space as discarding all ack packets, it can give much of that benefit while still holding part of the conversation between client and server.

Cons: It is no longer possible to completely reconstruct the entire TCP conversation.

Discard specific protocols

BPF allows you to strip out entire classes of packets. The first 4 examples below strip out entire protocols: ARP at the link layer, and EGP, UDP, and TCP from /etc/protocols.

The last 3 examples show stripping out specific TCP or UDP ports (see “/etc/services”), letting you keep some kinds of TCP or UDP traffic while discarding less valuable or interesting conversations.

How:

tcpdump … 'not arp'
tcpdump … 'not egp'
tcpdump … 'not udp'
tcpdump … 'not tcp'
tcpdump … 'not tcp port 22'
tcpdump … 'not tcp port 80'
tcpdump … 'not udp port 5353'

“Known good” traffic – discarding specific protocols to specific hosts

One difficulty with “not tcp port 22” above is that we no longer keep any ssh traffic – good or bad. What we really want is to stop saving huge amounts of packets from known good ssh connections, such as the connections we use to transfer data daily from a business partner.

We can avoid saving these while still saving the packets to/from other hosts, allowing us to identify people scanning for open ports.

How:

tcpdump ... 'not (tcp port 22 and host download.partner.org) and not (tcp port 22 and host manager.example.org)
and not (tcp port 873 and host rsync.example.org)'

Pros: Assuming there’s a lot of data between us and these protocol/host pairs, we no longer save these to disk but still see attempts to use these ports from unauthorized users.

Cons: If one of these remote machines is hacked, the attacker might be able to come in over one of these protocols
without any logging at all.

Full example

Here is a sample script that manages a tree of pcap files, analyzing, paring and compressing as appropriate. It sets a simple lock so that it can be safely run out of cron without risking starting 2 copies at the same time. Script is at: http://www.stearns.org/pcap-modify/example-manage-pcaps

Additional notes

Paring during initial capture

Most sniffers and pcap analysis tools allow you to use the above filters not only during post-capture paring, but also when doing the capture in the first place. If you decided that you never wanted to grab the ACK packets in the first place, you could grab everything but ACK packets right off the wire with the following all on one line:

tcpdump -qtnp -s 0 -i eth0 -w /root/pcap/eth0.`hostname -s`.`date '+%Y%m%d%H%M%S'`.pcap '(tcp[13] & 0x17 !=
0x10) or not tcp'

Tradeoff

As with simple file compression, most of the above techniques trade additional 1) CPU time, 2) memory, and 3) disk bandwidth for a) reduced disk needs and b) the ability to perform at least basic forensics further into the past. Before moving forward with these approaches, take a survey of the system(s) that can access the pcap store; do they have enough available CPU, memory, and disk bandwidth to be able to do this paring?

tcpdump automatic rotation

tcpdump makes it straightforward to save packets to a file with the “-w” option we’ve been using above. It also recognizes that placing days or weeks worth of packets in a single file can produce a massive file that’s difficult to process, so it also offers the ability to “rotate” that file – stop saving to it once it crosses a certain size (“-C”) or after a certain number of seconds (“-G”). This simultaneously prevents files from getting too large or staying open for too long.

nice -n 19 sudo tcpdump -i en4 -qtap -C 50 -G 300 -w 'test.%Y%m%d%H%M%S.pcap' 'not port 22'

This rotates to a new output file when the old file would have crossed 50 million bytes (million, not mega) or has been open for 300 seconds. The “-w” option that specifies the output file accepts strftime-style time format characters, so each new file has its creation time embedded in the file name. The filename used above gives Year-Month-Day-Hour-Minute-Second. I got these on a test run:

...
test.20180705133513.pcap13
test.20180705133513.pcap14
test.20180705133513.pcap15
test.20180705134013.pcap
test.20180705134013.pcap1
test.20180705134013.pcap2
test.20180705134013.pcap3
test.20180705134013.pcap4
test.20180705134013.pcap5
test.20180705134513.pcap
test.20180705134513.pcap1
test.20180705134513.pcap2
...

This shows tcpdump changing the timestamp in the middle of the filename when it’s been capturing for more than 5 minutes, and appending a number to the end of the filename when it crosses 50 million bytes of data inside a single pcap.

tcpdump processing rotated files

tcpdump also makes it simple to process capture files when it switches to a new file. The “-z” option requires the name of a command to run when tcpdump rotates to a new output file; this command is given the name of the most recently closed capture file. For example, if you just wanted to do some light compression on the previous output file, you could add “-z gzip” to the tcpdump command line. While the new output file is being written, tcpdump will execute

gzip just_closed_output_file_name

in the background at low CPU priority.

Here’s an example:

sudo tcpdump -i en4 -qtap -C 50 -G 300 -w 'test.%Y%m%d%H%M%S.pcap' -z gzip 'not port 22'

If you’d like to do more complex processing of the previous save file, just make a shell script whose name you place after “-z” on the tcpdump command line. The file to process is the sole command line parameter.
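For instance, a hypothetical handler script (the filename and the paring filter below are placeholders; any filter from the earlier sections would work) could pare and compress each capture file as tcpdump closes it:

```shell
#!/bin/sh
# Hypothetical post-rotate handler; save it somewhere in the capturing
# user's path, mark it executable, and pass its name to tcpdump's -z.
# tcpdump hands us the just-closed capture file as the only argument.
capfile="$1"

# Pare down to TCP open/close packets plus all non-TCP traffic,
# then compress what's left.
nice -n 19 tcpdump -s 0 -r "$capfile" -w "${capfile}.pared" \
    '(tcp[13] & 0x17 != 0x10) or not tcp' \
  && mv "${capfile}.pared" "$capfile" \
  && nice -n 19 bzip2 -9 "$capfile"
```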

ionice

On Linux systems, you can instruct the operating system to not only run these after other tasks have gotten all the CPU they need (the “nice -n 19” prefixes we used above), but also to make sure all other tasks have gotten all of the disk bandwidth they need first as well. This truly sets all of these paring tasks as ones that will not affect the other tasks on the system – doubly important when you may be capturing packets live.

To also use ionice, put

ionice -c 3

in front of any “nice -n 19” in the commands above.
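For example, the earlier recompression command becomes (ionice -c 3 is Linux-specific):

```shell
# Idle I/O class plus lowest CPU priority: other work always goes first.
ionice -c 3 nice -n 19 bzip2 -9 original.pcap
```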

find … -mtime   in scripting

The “-mtime” option to the standard Linux/Unix “find” command lets you specify that you only want to look at files whose modification time is older than N days and/or younger than N days (you can also specify modification time in seconds, minutes, hours, or weeks by appending “s”, “m”, “h”, or “w” to the end of the number). Here’s an example of getting a directory listing of only files between 50 and 60 days old (files younger than 50 days old or older than 60 days old are ignored):

ls -al `find pcap-modify/ -type f -size +0c -mtime +50 -mtime -60`

-rwxr-xr-x 1 wstearns wstearns 7278 May 8 18:25 pcap-modify/pcap-modify.py
-rwxr-xr-x 1 wstearns wstearns 7278 May 8 18:25 pcap-modify/pcap-modify.v0.4.py

For reference, the “-type f” requires that the object be a file, and “-size +0c” says the file size has to be
larger than 0 bytes.
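The same option drives the “delete files more than N months old” step from the paring schedule; a nightly cron entry along these lines would handle it (the path and the 180-day limit are placeholders for your own policy):

```shell
# Delete compressed pcaps older than ~6 months.  Swap -delete for -print
# first if you want to review what would be removed.
find /var/pcaps -type f -name '*.pcap.bz2' -mtime +180 -delete
```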

Disk redundancy

For most types of data we want to have it stored on more than one physical disk, so if that disk dies we can still get the data from the second copy. You might consider placing the oldest pcap files on a storage area that only keeps a single copy of the data. That doubles your capacity, with the risk of losing those files if you lose one of the underlying drives.

Work off a policy

If your organization is large enough to need incident response capability, network monitoring, and storage for pcap files, it’s large enough to be governed by policies. Take the time to write up an addendum to your data retention policy stating for how long you’ll store full pcap captures, and at what time periods you’ll use the above techniques to pare them down and how. Get approvals, and then get paring!

Windows

“tcpdump” does not come on Windows systems; instead, use “windump”. When putting a filter on the command line, surround it with double quotes on Windows instead of the single quotes used on Mac and Unixes:

tcpdump ... 'not port 22'

becomes:

windump ... "not port 22"

Thanks

Many thanks to George Bakos, Chris Brenton, Melissa Bruno, Samuel Carroll, Logan Lembke, and Kris Chew for their time and help in putting this together. Also, thank you to John Strand and Active Countermeasures for sponsoring the time to write this.

Resources

IPv4 TCP/IP reference
https://www.sans.org/security-resources/tcpip.pdf

IPv6 TCP/IP reference
https://www.sans.org/security-resources/ipv6_tcpip_pocketguide.pdf

passer
http://www.stearns.org/passer/

pcap-modify
http://www.stearns.org/pcap-modify/

pbzip2
If not included with your operating system, see http://www.compression.ca/pbzip2/

Compression tools
https://en.wikipedia.org/wiki/Comparison_of_file_archivers
https://en.wikipedia.org/wiki/Lossless_compression#Lossless_compression_benchmarks
