Threat Hunting False Positives
What Is a False Positive?
In Threat Hunting, we’re looking for telltale signs of malicious activity: some kind of communication between our internal network and the outside world associated with an infected system. These signs include beacons, long connections, large numbers of unique DNS lookups in a single domain, rare client signatures, and certificate issues.
The potential issues a Threat Hunting tool reports will include both genuinely malicious activity and legitimate traffic. That legitimate traffic is what we call a false positive: traffic incorrectly flagged as potentially malicious. Your mission – should you choose to accept it – is to identify these false positives and remove them from view so you won’t have to investigate them every day. By removing them from view and shutting down truly infected systems, you’ll spend less time each day on Threat Hunting.
We’ll cover the different types of False Positives you might find in a Threat Hunting platform, and near the end, we’ll go over some specific approaches to handling them in a safe and effective way.
False Positive Types
Beacons and Strobes
Infected machines commonly “call home” by making an outbound connection to a Command and Control server to ask for instructions. This check-in happens regularly; even if it isn’t strictly every N seconds, the repeating pattern lets us categorize it as a Beacon.
The tricky part is that not every Beacon (regular connections between a pair of systems) is malicious. Here are some examples of non-malicious beacons:
- NTP (Network Time Protocol). Because the clocks found in computers are notoriously inaccurate (or entirely missing in some), the Internet has machines with highly accurate clocks (called Time servers) that allow other servers and regular PCs to regularly check in and nudge their clocks forward or back to correct for drift. Note the phrase “regularly check in”; this commonly happens around once every 15 minutes, and that makes this a Beacon.
- Network monitoring systems. These are also commonly set up to perform server checks (“Is the system live? Is the webserver on it running?”) every N minutes, so these checks also count as benign Beacons.
- Regular data transfers between systems, such as scheduled backups or log shipping
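Whether a suspected Beacon really is regular can be checked by hand from Zeek’s conn.log. The sketch below fabricates a three-line, tab-separated conn.log (timestamp in column 1, id.orig_h in column 3, id.resp_h in column 5, matching Zeek’s default field order, but check your log’s #fields header) and then histograms the gaps between connections for one host pair; the IP addresses are example values.

```shell
# A miniature, fabricated conn.log for illustration: tab-separated, with
# the timestamp in column 1, id.orig_h in column 3, id.resp_h in column 5.
printf '100.0\tC1\t10.1.1.5\t40000\t192.0.2.10\t123\n' >  conn.log
printf '160.0\tC2\t10.1.1.5\t40001\t192.0.2.10\t123\n' >> conn.log
printf '220.0\tC3\t10.1.1.5\t40002\t192.0.2.10\t123\n' >> conn.log

# Histogram the gaps (in seconds) between connections from 10.1.1.5 to
# 192.0.2.10; a single dominant gap means beacon-like regularity.
awk -F'\t' '$3 == "10.1.1.5" && $5 == "192.0.2.10" {print $1}' conn.log |
  sort -n |
  awk 'NR > 1 {printf "%.0f\n", $1 - prev} {prev = $1}' |
  sort -n | uniq -c | sort -rn | head
```

Here the output is a single line reporting two sixty-second gaps. A benign NTP check-in produces exactly the same kind of dominant gap as malware, which is why we investigate before whitelisting.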
Long Connections
A connection between a client and a server was primarily designed to be short-lived: the client connects to a server, places a request, receives a reply, and closes the connection. Most conversations over the Internet follow this model. Threat Hunting packages report on connections that are left open for hours because some malware uses these to talk with its command and control servers, slipping under the radar of a Threat Hunting approach that only looks for Beacons.
Here are some of the most common types of connections that can stay open for more than a few hours:
- Replication connection between a primary database server and a secondary/backup database server
- SSH connection from a developer/administrator machine to a server
- BGP connection between 2 routers
- VPN link between sites
- Long download connection
- Audio/video streaming
- Remote desktop connection
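Connections like the ones in the list above can be pulled out of Zeek’s conn.log with a one-liner when you want to triage them by hand. This sketch fabricates a two-line conn.log; the column positions (id.orig_h in 3, id.resp_h in 5, duration in seconds in 9) match Zeek’s defaults but should be checked against your log’s #fields header, and the IPs are example values.

```shell
# A miniature, fabricated conn.log: tab-separated, with id.orig_h in
# column 3, id.resp_h in column 5, and duration (seconds) in column 9.
printf '100.0\tC1\t10.1.1.7\t5900\t203.0.113.5\t22\ttcp\tssh\t21600.5\n' >  conn.log
printf '101.0\tC2\t10.1.1.8\t5901\t203.0.113.6\t80\ttcp\thttp\t2.1\n'    >> conn.log

# List connections that stayed open longer than 4 hours (14400 seconds);
# "-" means Zeek recorded no duration, so skip those rows.
awk -F'\t' '$9 != "-" && $9 + 0 > 14400 {print $3, $5, $9}' conn.log
```

With this sample data, only the six-hour SSH session is printed; you’d then decide whether it belongs on the benign list above or deserves a closer look.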
Threat Intel/Blacklisted IPs
Threat Intel lists are focused on listing IP addresses that have been involved in malicious activity at some point. The problem is that last part: “at some point”. It’s much easier to list an IP address that sent malicious traffic than it is to remove an IP address from the list. Do we know for sure that the attacker has stopped? Is the (apparently malicious) IP address just an innocent machine in the middle relaying traffic? Has the IP address been assigned to a completely different user? (This last question becomes even more important when the IP address is part of a cloud provider’s IP space as these can change ownership in just a few minutes).
These issues lead us to the opinion that traffic from an IP on a Threat Intel list should not be considered malicious for the sole reason of being blacklisted. It’s OK to use it as a supporting piece of evidence if there are other indications of malicious activity.
Large Numbers of Unique DNS Lookups
When a Threat Hunting tool inspects DNS traffic, it’s not looking for large numbers of requests or regularity of communication; it’s looking for large numbers of unique requests that all live inside a particular domain (such as a00007.example.com, a00008.example.com, a00009.example.com…). These DNS requests and replies are the carriers for the conversation between the infected system and the command and control server.
To see a False Positive in the DNS module, there would have to be a DNS domain that has hundreds or thousands of unique requests placed to it.
- Legitimate domains don’t do this; most commonly accessed domains will have far fewer than 25 requestable objects.
- Cloud providers and CDNs might have somewhere in the low hundreds of objects in a few domains, but even these should stay below 1000.
- A custom application could choose to communicate over DNS using a specific domain and generating unique requests to bypass caching. In that case you have the ability to whitelist the domain in question.
- Finally, there are domains used by specific network requests like hostname lookups for an IP address (ending in “ip6.arpa.” and “in-addr.arpa.”). These will show up if you make large numbers of these lookups, and can also be whitelisted.
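A quick way to spot this pattern yourself is to count unique query names per base domain in Zeek’s dns.log. The sketch below fabricates a four-line, tab-separated dns.log with the query name in column 10 (Zeek’s default position for the “query” field, but verify against your #fields header); the naive last-two-labels split is good enough for triage but wrong for domains like example.co.uk.

```shell
# A miniature, fabricated dns.log: nine placeholder columns, then the
# query name in column 10.
{
  printf 'x\tx\tx\tx\tx\tx\tx\tx\tx\ta00007.example.com\n'
  printf 'x\tx\tx\tx\tx\tx\tx\tx\tx\ta00008.example.com\n'
  printf 'x\tx\tx\tx\tx\tx\tx\tx\tx\ta00008.example.com\n'
  printf 'x\tx\tx\tx\tx\tx\tx\tx\tx\twww.test-site.org\n'
} > dns.log

# Count unique query names per base domain (naive split on the last two
# labels); domains with hundreds or thousands of names deserve a look.
awk -F'\t' '{n = split($10, p, "."); if (n >= 2) printf "%s.%s\t%s\n", p[n-1], p[n], $10}' dns.log |
  sort -u | cut -f1 | sort | uniq -c | sort -rn | head
```

With the sample data, example.com shows two unique names and test-site.org one; on a real network, a C2 channel tunneling over DNS stands out with counts in the thousands.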
Rare Client Signatures
When working with the Client Signatures module, you’re presented with rare User-Agent strings in view 1 and rare hashes (random-looking alphanumeric strings describing the way the client and server negotiate encryption options) in view 2. The logic is that the common strings will be the ones used by your legitimately installed applications and web browsers, while the rare ones may point to systems that have malicious code communicating over HTTP or HTTPS.
That simple view of the process is quickly muddied by the fact that a network of any appreciable size will have 1) systems running different versions of your legitimate operating systems, 2) systems running entirely different operating systems, 3) full programs, applets, plugins, and operating-system-specific tasks making outbound connections, 4) IoT and IIoT devices sending information back to external servers, 5) printers, cameras, and other network devices, and 6) employee-owned devices such as laptops, cell phones, and tablets. The HTTP and HTTPS connections made by these show up in the Client Signatures module, commonly down near the rare end.
While it can take some time to track these down and categorize them as malicious or benign, once you do, please whitelist the signatures (User-Agent strings or hashes) you’re confident are benign. They will no longer show up in the report.
There’s one side effect to this whitelisting. The Rare Client Signatures module may list otherwise legitimate machine/operating system/application/plugin combinations that are either behind or ahead on their patching. Be aware that if you whitelist a legitimate client signature as it’s used today, applications or operating systems may stop using that signature once they’re patched; the whitelist entry will hide it both now and after the patching. This is not a big issue – the ability to report on out-of-date patch levels is a nice plus, but not the primary goal of a Threat Hunting tool.
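Before whitelisting, it helps to see how rare a signature actually is. The sketch below fabricates a three-line, tab-separated http.log with id.orig_h in column 3 and the User-Agent in column 13 (Zeek’s default position in recent versions; older releases put user_agent in column 12, so check your #fields header) and ranks User-Agent strings by how many distinct hosts use them.

```shell
# A miniature, fabricated http.log: id.orig_h in column 3, user_agent in
# column 13, placeholder "x" everywhere else.
{
  printf 'x\tx\t10.1.1.5\tx\tx\tx\tx\tx\tx\tx\tx\tx\tMozilla/5.0\n'
  printf 'x\tx\t10.1.1.6\tx\tx\tx\tx\tx\tx\tx\tx\tx\tMozilla/5.0\n'
  printf 'x\tx\t10.1.1.7\tx\tx\tx\tx\tx\tx\tx\tx\tx\todd-tool/0.1\n'
} > http.log

# Rank User-Agent strings by how many distinct hosts use them, rarest
# last; the rare end is where to start investigating.
awk -F'\t' '{print $3 "\t" $13}' http.log | sort -u |
  cut -f2 | sort | uniq -c | sort -rn | tail
```

With the sample data, Mozilla/5.0 is used by two hosts and odd-tool/0.1 by one, so odd-tool/0.1 lands at the rare end of the list.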
Certificate Issues
This module reports on certificates that appear to have problems, such as not being correctly signed by a certificate authority or having expired. You will occasionally see these types of issues on development or test servers, which commonly get a self-signed certificate as no critical data is stored on them.
As with the other modules, AI-Hunter allows you to whitelist these development and test servers so they no longer show up in the report.
How to Handle These
Once you’ve investigated the particular traffic involved, you should have a reasonably good sense of whether the traffic is malicious, benign, or grey (somewhere in between).
We want to focus on two types: 1) benign traffic and 2) grey traffic where you’re confident you have no interest in following up, now or ever. In both cases, we want to remove this traffic from view so we don’t have to re-analyze it tomorrow, and the next day, and the day after that…
To do this, we have a few options, starting with the “safest” approaches (least likely to hide other malicious traffic) and working our way towards the “less safe” approaches (that may hide other malicious traffic).
Whitelist the Traffic
By entering this traffic in an internal whitelist, you remove it from view. Your Threat Hunting software still has to import it, store it, and calculate its threat potential, but you no longer need to see it. This has the advantage that you can always remove the address from your whitelist and immediately show it if needed.
We have multiple ways to whitelist these hosts:
IP Pair
AI-Hunter allows you to whitelist all connections between 2 IP addresses. This approach is perfect when the traffic is between two specific machines, such as a primary database server that hands a copy of all database changes to a secondary database server. By whitelisting the pair, we still do analysis on the non-replication traffic to and from other systems. This means that if either machine is infected we can still see that.
Single External IP
There will be times when we have a lot of internal machines communicating with a single external system. This would include the NTP server our systems use to keep their internal clocks synchronized. While it’s technically possible to use the “IP Pair” approach for “internal.ip.1 -> external.ip”, “internal.ip.2 -> external.ip”, “internal.ip.3 -> external.ip”, this can be a lot of work for an internal network with a lot of hosts.
If you’re confident the external system is not one that is likely to be involved in any malicious activity, we can whitelist that single host IP. By doing this, we remove any traffic to “external.ip” with a single whitelist entry. Since we haven’t whitelisted any internal systems, we still get to inspect non-NTP traffic to/from our internal machines in this example.
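To decide between the IP-pair and single-host approaches, it helps to know how many internal machines actually talk to the external server. This sketch fabricates a small conn.log (id.orig_h in column 3, id.resp_h in column 5, Zeek’s default positions) and counts the distinct internal hosts contacting one external address; 203.0.113.10 is a stand-in for the NTP server.

```shell
# A miniature, fabricated conn.log; 203.0.113.10 stands in for the
# external NTP server.
{
  printf '1\tC1\t10.1.1.5\t123\t203.0.113.10\t123\n'
  printf '2\tC2\t10.1.1.6\t123\t203.0.113.10\t123\n'
  printf '3\tC3\t10.1.1.6\t123\t203.0.113.10\t123\n'
  printf '4\tC4\t10.1.1.7\t123\t203.0.113.10\t123\n'
} > conn.log

# Count distinct internal hosts talking to that server; a large count
# argues for one single-host whitelist entry instead of many IP pairs.
awk -F'\t' '$5 == "203.0.113.10" {print $3}' conn.log | sort -u | wc -l
```

Here the count is 3; with hundreds of internal hosts, a single whitelist entry for the external IP is clearly the less painful option.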
Subnet of Servers
In the above NTP example, we had the advantage that there is usually a single NTP server to which the internal systems synchronize (though in some cases there may be up to 3 NTP servers, leading to 3 whitelist entries above).
What happens if we have 250 database servers located in a data center we’d like to whitelist? If they’re on the same IP subnet, we can whitelist the entire subnet at once. Instead of whitelisting 10.20.30.1, 10.20.30.2, 10.20.30.3, etc., we can whitelist 10.20.30.0/24 and have a single whitelist entry that handles all of them.
The subnet approach becomes much more of a problem when there are banks of machines located in different data centers (possibly on different continents!) to whitelist. Trying to list them individually or by subnet is a lot of grunt work that we may be able to avoid.
ASN
When AI-Hunter shows an external IP, it also displays the ASN (Autonomous System Number) for it. This number is associated with the organization that owns the machines, and unlike a subnet, it can cover all the computers for that organization no matter what data center they’re in, what ISP they use, or on what continent they’re located.
Very large organizations can have more than one ASN, and we can use this to our advantage. Microsoft, for example, has registered multiple ASNs, but all their patching and time servers are located in ASN 8075. If I trust that those systems are well protected and unlikely to be infected, I can tell AI-Hunter to whitelist the entire ASN – that immediately removes any update checks and time sync checks made by my Windows machines!
Client/Internal IP
You may come across one or more systems that are showing up repeatedly in your Threat reports. As an example, your network monitoring system might very well show up multiple times in your Beacons report as it makes regular connections to your servers and services to make sure they’re still working.
Whitelisting this or any other client/internal IP address is particularly dangerous. When you whitelist a client IP those regular connections disappear from your Beacons list, but you also blind yourself to any other malicious Beacon traffic from that IP. If you end up taking this approach, please make sure that the system in question 1) runs an operating system you trust, 2) has minimal listening ports, 3) ideally has no listening ports that can be reached from the outside world, and 4) is well patched, maintained, and monitored.
As a side note, be very careful when working with an IP address used by a proxy of any kind. Let’s say you have a web proxy with 192.168.74.2 as its internal address and 203.0.113.8 as its external address (example addresses). All your web clients know to send their HTTP and HTTPS requests to 192.168.74.2, and the proxy will then place a request from 203.0.113.8 for the actual web page/image. The danger here is that if you whitelist either IP, you’re whitelisting all traffic going through the proxy, including some that may very well be malicious! We strongly discourage whitelisting by IP in this case, though you could consider whitelisting by IP pair (internal machine -> proxy) if a small number of systems keep showing up as beacons here.
This note applies to other kinds of proxies as well. The most common types you’re likely to have include DNS servers and SMTP/Mail servers.
Caveat: Cloud Providers
Use subnet and ASN whitelisting sparingly, and only when you’re confident that all the machines in that block are trusted (they’re well secured, all used for similar legitimate purposes, and connections to or from them are very likely to be benign). In particular, we strongly discourage whitelisting subnets or ASNs used at Cloud providers. The machines hosted at a cloud provider are by design a collection of computers with different owners, uses, and trust levels. Trying to whitelist a block of Cloud-hosted machines runs a very high risk of blinding yourself to malicious activity from another system in that block.
Whitelisting in RITA
RITA, our open-source threat hunting tool, does not have native whitelisting built into the program, but it is possible to whitelist specific IP addresses. Here’s how.
Create a text file of IP addresses you no longer wish to see in your RITA command line output. For example:
cat exclude.txt
10.1.2.3
10.4.5.6
We no longer want to see these addresses in our RITA output, so we remove lines containing them:
rita show-beacons dataset | grep -v -w -F -f exclude.txt | less -S
The “grep” command removes (-v) all RITA output lines that contain any of the fixed strings (-F) in the file (-f) exclude.txt. By using “-w” we require that these match as “words” – there must be something other than a letter or number immediately before and after the match. Without this flag, we’d also throw away lines containing addresses like 10.1.2.30 or 110.1.2.3.
This approach works for any of the RITA reports, not just show-beacons. If you wanted to have separate whitelists for each module, you could maintain separate whitelist files, like:
rita show-beacons dataset | grep -v -w -F -f beacons-exclude.txt
rita show-long-connections dataset | grep -v -w -F -f long-exclude.txt
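Per-module exclude files can be wired up with a tiny shell function so you don’t have to remember which file goes with which report. This is a sketch; the rita_filtered name and the “<module>-exclude.txt” naming convention are our own inventions, not part of RITA.

```shell
# Sketch: pick the exclude file for a RITA module automatically.
# Usage: rita_filtered show-beacons dataset
rita_filtered() {
    module="${1#show-}"    # e.g. show-beacons -> beacons
    rita "$1" "$2" | grep -v -w -F -f "${module}-exclude.txt"
}
```

Calling rita_filtered show-beacons dataset then filters the beacons report through beacons-exclude.txt, and the same function works for any other show-* module that has a matching exclude file.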
Filter the Traffic Out From Packet Capture
Whitelisting has 2 downsides. First, even though we no longer see the traffic, the software still has to import it, store it, and do the needed analysis on it. Second, your Threat Hunting package may not be able to whitelist based on port numbers.
This becomes a major problem on a heavily loaded network where we’re sure we have no interest in a few types of traffic. For instance, we may know that our internal network has very heavy data flow to and from a file server on multiple ports, and also heavy traffic between two OpenVPN endpoints. In this case, we can put in a filter that removes that traffic entirely from the packets provided for analysis. Because the kernel never hands these packets up for processing, we no longer process them, store information about their connections, or score the traffic on its threat level. This reduces both CPU load (which can also drive down packet loss) and storage needs, freeing up resources for the remaining traffic.
To make this work with Zeek, we tell Zeek to ignore 1) all traffic to or from our file server (192.168.1.100 in this example) and 2) all traffic between our two VPN endpoints (192.168.1.1 and 203.0.113.2) on TCP or UDP port 1194. The filter is placed in a “zeekargs=” line you’ll add to the end of zeekctl.cfg, like:
zeekargs=-f "(not host 192.168.1.100) and (not (host 192.168.1.1 and host 203.0.113.2 and port 1194))"
Once you run
sudo zeekctl deploy
neither the file server traffic nor the VPN traffic will be passed up to Zeek.
The filter used above is called a BPF, or Berkeley Packet Filter. It’s flexible enough to allow you to specify traffic based on IP address, protocol, port, flags, and a wide range of other features of IP packets.
For more references on handling traffic:
- The “tcpdump” manual page has some straightforward BPF examples in its EXAMPLES section.
- The “pcap-filter” documentation page supplied on most Linux systems gives more detail on the BPF language.
Many thanks to Naomi and Keith for reviewing this and to Shelby for help with publishing it!
Interested in threat hunting tools? Check out AC-Hunter
Active Countermeasures is passionate about providing quality, educational content for the Infosec and Threat Hunting community. We appreciate your feedback so we can keep providing the type of content the community wants to see. Please feel free to Email Us with your ideas!
Bill has authored numerous articles and tools for client use. He also serves as a content author and faculty member at the SANS Institute, teaching the Linux System Administration, Perimeter Protection, Securing Linux and Unix, and Intrusion Detection tracks. Bill’s background is in network and operating system security; he was the chief architect of one commercial and two open source firewalls and is an active contributor to multiple projects in the Linux development effort. Bill’s articles and tools can be found in online journals and at http://github.com/activecm/ and http://www.stearns.org.