
Beyond Blacklisting: Cyberdefense in the Era of Advanced Persistent Threats


This article first appeared in IEEE IT Professional magazine. IEEE IT Professional offers solid, peer-reviewed information about today's strategic technology issues. To meet the challenges of running reliable, flexible enterprises, IT managers and technical leads rely on IT Pro for state-of-the-art solutions.

 

Between reports of major retailers being breached, industrial espionage by advanced persistent threats, and sensitive data being held hostage by ransomware, it’s easy to see why many in the security industry have begun giving up on prevention and are instead focusing on detection and incident response.

Many of today’s defenses rely on lists of “known bad,” or blacklisting. Signature-based antivirus and IP address reputation feeds are examples of blacklisting technologies that are doomed to fail because it’s trivial for attackers to pivot through a new IP address or package a new executable. Nevertheless, companies continue to add devices to their networks that ultimately rely on blacklisting. Shifting the focus to detection and response won’t help the situation until we can reliably block the most common attacks. Furthermore, now that IPv4 address space is depleted, blacklisting IP addresses has become extremely difficult: dozens of domains can share the same IP address through content delivery networks.

CryptoLocker is an example of malware that has cost many businesses dearly. It’s commonly delivered via a phishing email containing a zipped executable. Upon execution, it installs itself into the AppData folder in the current user’s profile on Windows systems. Next, it negotiates an encryption key over the Internet with its command-and-control servers, then encrypts any data it can access. Victims must either pay a ransom to get the key to decrypt their data or recover from backup.1 Malware authors have tools that let them quickly generate new variants of their executables, rendering signature-based detection ineffective. Generating a new malicious payload costs next to nothing.

An implementation of application whitelisting on the end user’s computer would stop CryptoLocker dead in its tracks. Any executable that wasn’t on a known good whitelist would be prevented from executing. Unfortunately, whitelisting hasn’t gained much traction because of the misperception that implementing it is too costly and cumbersome. All enterprise versions of Windows client and server OSs include either Software Restriction Policies or AppLocker, so the additional software cost for a company is negligible. There’s some cost in terms of time and implementing a process. However, once whitelisting is in place, the maintenance cost is equal to or less than recovering from infections. More important, whitelisting dramatically reduces the risk of a major breach.

Whitelisting technology has improved since the days of Tripwire and other similar products that focused on using static hashes to identify changes. New technologies such as AppLocker can use signature data, file hashes, and path rules to create more flexible rules. For example, you can whitelist Java version 7.0.650.20 and later versions on the basis of a signature. Any new version that’s released and properly signed won’t need an updated rule in the whitelist. Additionally, we’ve found that although using path rules is technically not as secure as publisher or hash rules, it dramatically increases the difficulty for attackers, as long as nonadministrators can’t write to whitelisted paths. Attackers can’t just trick a user into running malware; they must use a privilege escalation vulnerability, increasing the cost to them. Path and publisher rules can enable weekend help desk staff to handle most common off-hours software installations without additions to the whitelist policy. Remote clients connected to the corporate network over a virtual private network (VPN) can receive updated policies covering applications newly added for remote users.
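
As a concrete illustration, the built-in AppLocker PowerShell cmdlets can generate publisher rules directly from an installed application. The following is a minimal sketch, assuming Java is installed under C:\Program Files\Java; the output path and rule-name prefix are illustrative, not our exact production rules.

    Import-Module AppLocker

    # Inventory the application's binaries and build publisher rules from their
    # signatures, falling back to hash rules for any unsigned files.
    Get-AppLockerFileInformation -Directory 'C:\Program Files\Java' -Recurse -FileType Exe, Dll |
        New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -RuleNamePrefix 'Java' -Xml |
        Out-File 'C:\Policies\java-publisher.xml'

    # Merge the new rules into the local policy; roll out via group policy once verified.
    Set-AppLockerPolicy -XmlPolicy 'C:\Policies\java-publisher.xml' -Merge

Because a publisher rule keys on the vendor’s signature rather than a specific file hash, a properly signed newer version of the same product keeps running without any policy change.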

The Cost of Infections

Consider the cost to recover from a simple malware infection. Many enterprises have rightly assumed that an infected computer can no longer be trusted and must be reimaged. Our environment uses System Center Configuration Manager to image computers. Backing up data from the infected PC, scanning that data to ensure it’s clean, reimaging the computer, loading the data back on the machine, and performing quality checks take at least two hours of technician time. Even if a loaner laptop is provided, the end user still incurs downtime while computers are swapped and lost productivity while reconfiguring the reimaged computer to his or her liking. If technicians charge an average of US$25 per hour, and the billable hourly rate for the infected computer’s user is $50 per hour, the total incident cost is then $150 (two hours of technician time at $25 plus roughly two hours of lost user time at $50), not taking into account the reimaging infrastructure’s cost or any other damages associated with the malware. A loaner computer can reduce the downtime, but some productivity loss will still likely occur during the transition, caused by transferring files and data from the infected machine and installing custom programs on the loaner.

Table 1. Whitelisting’s annual cost for an organization with 800 users.

The costs rise dramatically if the malware leads to a data breach. Computer forensics can cost hundreds of dollars an hour. If the entire network is compromised, which often happens in large-scale breaches, the bill from a reputable computer forensics company will be more than $100,000. The average total cost of a data breach in 2014 was an estimated $3.5 million.2 This is what every board and CEO fears: a breach resulting in the loss of sensitive, confidential information.

Whitelisting Costs

With whitelisting, the cost shifts to the time a system administrator spends whitelisting line-of-business applications. In our experience, whitelisting a new application normally takes less than half an hour. If a system administrator’s hourly rate is $50, proactively whitelisting an application costs the company at most $25. If a user incurs lost productivity waiting for an application to be whitelisted, that cost could double per incident. This is half the cost of the best-case scenario for recovering from an infection and greatly reduces the potential for a data breach. A sharp reduction of this potential is whitelisting’s true return on investment.

Table 1 compares the annual cost of reimaging machines that regularly get infected versus the cost of maintaining a whitelisting infrastructure. The statistics are based on the average incident rates before and after implementing whitelisting in our environment. This shows that, in our environment, whitelisting is no costlier to maintain than the status quo and can dramatically reduce the likelihood of a major breach.

The data we present here is from our own network and is based on our pay rates and workflows. We don’t have broader empirical data regarding the costs. We believe that companies could scale this approach by organizational unit, or capture much of the benefit by employing whitelisting to protect only the most valuable assets or the workstations of users with access to the most valuable data. The main point of this article is mitigating risk. The cost of whitelisting versus the cost of reimaging isn’t the issue here and shouldn’t be the focus. Proactively whitelisting is no more expensive than reactively cleaning up machines. The real benefit is reducing the risk of a breach that would cost millions of dollars and damage the company’s reputation. We’ve implemented whitelisting in our environment to reduce the risk of a breach; we hope others will do the same.

Implementing Whitelisting

For systems that change infrequently, such as kiosk or point-of-sale terminals, you can apply a restricted AppLocker policy based on a gold image. This policy allows only executables originally in that image to run. Microsoft has published guidance on how to do this.3 For more dynamic endpoints, you can leverage default rules and restrict administrator privileges to limit execution to folders only administrators can write to. Publisher rules add further flexibility by allowing all signed code from a trusted vendor to run. This isn’t a silver bullet, and potential weaknesses exist, such as scripting languages or software exploitation, that you’ll still need to address. We presented ways to address most of this approach’s weaknesses at Shmoocon 2014; sample group policy objects and a recording of that presentation are available online.
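
For the gold-image case, one way to generate such a policy is to inventory the reference machine and emit rules for everything found. This is a sketch under the assumption that the image’s contents sit on the reference machine’s system drive; the output path is arbitrary, and Microsoft’s design guide (reference 3) covers the full process.

    Import-Module AppLocker

    # Inventory every executable and DLL present on the reference (gold) image.
    $goldImage = Get-AppLockerFileInformation -Directory 'C:\' -Recurse -FileType Exe, Dll

    # Prefer publisher rules, fall back to hashes for unsigned files, and collapse
    # overlapping rules with -Optimize before saving the policy for review.
    New-AppLockerPolicy -FileInformation $goldImage -RuleType Publisher, Hash `
        -User Everyone -Optimize -Xml | Out-File 'C:\Policies\kiosk-gold-image.xml'

Deploying the resulting policy in audit-only mode first confirms that nothing the terminal legitimately needs would be blocked before enforcement is switched on.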

Security best practices dictate that end users not run as administrators or even be given administrative rights on their own machines.4 Malware often gets on end-user machines by exploiting the end user through well-thought-out social-engineering tactics. Common ploys include asking the end user to click on a link, open a document, or directly install a program. Nothing can totally prevent end users from falling for these tactics. This ultimately means that, for computers and networks to stay malware free, every new piece of code that needs to run on a machine must be trusted or examined by someone who can determine its legitimacy. This idea tends to frighten most people in the industry, but it’s the most effective way to keep malware out of networks. This concept isn’t really that revolutionary; it just means enforcing policies and procedures that we IT security professionals have already written.

End users are issued machines containing a set of approved applications and software versions that have been tested for functionality and compatibility. Whitelisting requires identifying all programs that are allowed to run and explicitly allowing them to run by policy. You can easily do this with an extended monitoring period and some simple scripting. Once you’ve finalized this list, you flip the switch, and only those approved applications may run. Any new applications will be whitelisted either before they’re deployed to the environment or on a per-case basis if a user needs a specific application.
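
The extended monitoring period can be driven by AppLocker’s audit-only mode: run the policy in audit mode for a while, then turn the audited events into candidate rules. A minimal sketch, assuming the standard EXE and DLL event log and an arbitrary output path:

    # Collect everything that ran during the audit period but would have been
    # blocked under enforcement, and generate candidate rules from those events.
    Get-AppLockerFileInformation -EventLog `
        -LogPath 'Microsoft-Windows-AppLocker/EXE and DLL' -EventType Audited |
        New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
        Out-File 'C:\Policies\audited-applications.xml'

Review the generated rules before merging them; anything unexpected in the audit log deserves scrutiny rather than a rule.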

Updated whitelist rules can be quickly deployed to both local and remote machines via a VPN. Default rules allow any application installed in the %programfiles% or %windows% directories to run without needing the creation of additional rules. You can also leverage publisher rules to allow signed code from a trusted vendor to be installed and run without needing new rules. This lessens the effort needed to maintain the whitelist by requiring updates only for unsigned programs that must run code from the user’s profile. Properly restricted users won’t have administrative rights, which will limit their ability to write to the %programfiles% or %windows% directories, thus limiting software installation. “Break the glass” local administrator accounts with unique passwords can be configured on each endpoint to allow remote installation with assistance from the help desk.
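
For example, an updated policy can be reviewed and then merged into a domain group policy object so that both on-site and VPN-connected machines pick it up at their next policy refresh. This is a sketch only; the LDAP path below is a placeholder, not a real GPO.

    # Export the currently effective policy for review or backup.
    Get-AppLockerPolicy -Effective -Xml | Out-File 'C:\Policies\effective-backup.xml'

    # Merge the new rules into the domain GPO that targets end-user workstations.
    Set-AppLockerPolicy -XmlPolicy 'C:\Policies\new-rules.xml' -Merge `
        -Ldap 'LDAP://dc01.example.com/CN={GPO-GUID-PLACEHOLDER},CN=Policies,CN=System,DC=example,DC=com'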

Whitelisting blocks both common infection vectors and common persistence techniques. If attackers can’t rely on their custom dropper files to execute, they’ll need to rely on software exploitation to remotely execute code. Exploitation will normally grant an attacker remote access, but he or she will lose access if the exploited process terminates or the system reboots. Attackers normally achieve persistence by installing a back door. They commonly place it in the user’s AppData folder because writing to that location doesn’t require administrative rights. However, if whitelisting prevents execution from any location other than the %programfiles% or %windows% directories, the back door will be blocked from execution. Attackers then must find an additional privilege escalation to maintain persistence.

Many malware kits leverage exploits for known vulnerabilities to infect users as they browse malicious websites. Regular patching of frequently used software can usually prevent this attack vector from being successful. If attackers can’t use known exploits, they’ll need to develop their own 0-day attack. This task isn’t trivial, as evidenced by the rewards in bug bounties and 0-day attacks’ cost on the black market. A reliable 0-day attack can cost more than $100,000. Attackers must be more judicious in using these exploits lest they become known and patched. A 0-day attack’s potential value must be greater than its cost for it to be worth implementing.

Software developers can make exploit development significantly more difficult by compiling their software to run with memory-hardening techniques such as DEP (data execution prevention), ASLR (address space layout randomization), and SEHOP (structured exception-handling overwrite protection). You can influence software companies by using utilities such as BinScope or recently released PowerShell scripts to determine whether executables were compiled with those techniques, and then encouraging those companies to adopt them.5 If a company won’t recompile, you can use Microsoft’s EMET (Enhanced Mitigation Experience Toolkit) to force programs to use these protections at run time. Researchers have proven that attackers can work around these protections, but this requires more time and effort. Articles and presentations detailing EMET bypasses garner attention in the security community because they’re still a novel concept, especially compared to run-of-the-mill antivirus bypasses.
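
If you don’t want to rely on an external utility, the relevant compile-time flags can also be read directly from a binary’s PE header. The following is a simplified sketch of that idea; it is not the NetSPI script from reference 5, and the function name and file path are illustrative.

    # Report whether a PE file was linked with ASLR (DYNAMIC_BASE) and DEP (NX_COMPAT).
    function Test-MemoryHardening {
        param([Parameter(Mandatory)][string]$Path)

        $bytes = [System.IO.File]::ReadAllBytes($Path)
        # e_lfanew at offset 0x3C points to the "PE\0\0" signature.
        $peOffset = [BitConverter]::ToInt32($bytes, 0x3C)
        # DllCharacteristics sits 70 bytes into the optional header, which follows
        # the 4-byte signature and the 20-byte COFF header (PE32 and PE32+ alike).
        $dllChars = [BitConverter]::ToUInt16($bytes, $peOffset + 4 + 20 + 70)

        [PSCustomObject]@{
            File = $Path
            ASLR = [bool]($dllChars -band 0x0040)  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE
            DEP  = [bool]($dllChars -band 0x0100)  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT
        }
    }

    Test-MemoryHardening -Path 'C:\Program Files\SomeVendor\app.exe'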

Organizations that have installed and tuned a log management system such as a security information and event management (SIEM) system know that a large, busy network generates a great deal of noise. SIEM systems were created to collect logs from major systems and end users, normalize the logs, correlate events across vast numbers of log sources, and generate alerts for suspicious activities.6 If you use the steps we just mentioned to filter out low- and medium-level attacks, you can tune your SIEM system to look for the truly sophisticated events that try to bypass EMET, AppLocker, or your firewall policy, and you can write intelligent rules that report attempted attacks and publish alerts. Even if these sophisticated attacks do disable your defenses and establish a presence in your environment, they’ll be more easily detected and short-lived.
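
AppLocker records its decisions in a dedicated event log, so blocked-execution events are straightforward to forward to a SIEM or to review directly. A small sketch using the standard event IDs (8004 for a blocked EXE or DLL, 8003 for an audit-mode “would have been blocked”); the 24-hour window is arbitrary.

    # Pull the last day of blocked-execution events for forwarding or triage.
    Get-WinEvent -FilterHashtable @{
        LogName   = 'Microsoft-Windows-AppLocker/EXE and DLL'
        Id        = 8004
        StartTime = (Get-Date).AddHours(-24)
    } | Select-Object TimeCreated, MachineName, Message | Format-Table -AutoSize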

Whitelisting won’t solve all our security problems. Attackers can use exploits to compromise a machine if patches aren’t applied, and 0-day attacks can result in a compromise despite whitelisting. However, whitelisting still has value in limiting persistence options after exploitation; attackers will have much more difficulty gaining further access to privileged systems. The idea here is that if you create enough obstacles, less determined attackers will move on to softer targets.

AppLocker does address scripts for built-in scripting languages such as the batch-file language, VBScript (Visual Basic Scripting Edition), and PowerShell, and you can use it to prevent the interpreter from being installed or run. Any whitelisting technology has many other nuances you must take into consideration—for example, macros in Microsoft Office documents and JavaScript in PDF files or Web browsers. Nevertheless, just whitelisting executable files and dynamic-link libraries will block almost all commodity malware and many advanced persistent threats. For example, a Mandiant report described a phishing email that linked to a ZIP file containing an executable file with a PDF icon.7 Norton AntiVirus didn’t detect that attack, but whitelisting would have blocked it.
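
Script rules can be verified the same way as executable rules, for instance by asking whether a downloaded PowerShell script would be allowed to run under the effective policy. The file path and user name below are illustrative.

    # Check a specific script against the effective AppLocker policy.
    Get-AppLockerPolicy -Effective |
        Test-AppLockerPolicy -Path 'C:\Users\alice\Downloads\update.ps1' -User 'CONTOSO\alice'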

If you can limit potential attackers to highly skilled individuals or organizations with the time and resources to develop attacks that could successfully bypass whitelisting and exploit mitigations, you’ll significantly lessen your risk and increase your chances of detecting these attackers. When you no longer need to worry about attackers who simply bought crimeware on the black market and sent a phishing message, you can dedicate more resources to detecting anomalous activity from advanced adversaries.

References

  1. “CryptoLocker—a New Ransomware Variant,” blog, Emsisoft, 10 Sept. 2013; http://blog.emsisoft.com/2013/09/10/cryptolocker-a-new-ransomware-variant.
  2. “Ponemon Institute Releases 2014 Cost of Data Breach: Global Analysis,” Ponemon Inst., 5 May 2014; www.ponemon.org/blog/ponemon-institute-releases-2014-cost-of-data-breach-global-analysis.
  3. AppLocker Design Guide, Microsoft, 2013; www.microsoft.com/en-us/download/details.aspx?id=40330.
  4. Strategies to Mitigate Targeted Cyber Intrusions, Australian Signals Directorate, Feb. 2014; www.asd.gov.au/infosec/top35mitigationstrategies.htm.
  5. E. Gruber, “Verifying ASLR, DEP, and SafeSEH with PowerShell,” blog, NetSPI, 23 July 2014; www.netspi.com/blog/entryid/232/verifying-aslr-dep-and-safeseh-with-powershell.
  6. A. Kibirkstis, “What Is the Role of a SIEM in Detecting Events of Interest?,” SANS Inst., Nov. 2009; www.sans.org/security-resources/idfaq/siem.php.
  7. APT1: Exposing One of China’s Cyber Espionage Units, Mandiant, 2013.

About the Authors

Aaron Beuhring is a 13-year IT veteran and independent researcher. Contact him at abeuhrin@gmu.edu.

Kyle Salous is a 10-year information security veteran and an independent researcher, covering a broad spectrum of subjects. Contact him at ksalous@gmail.com.

 

