In this article, the second of two, we describe how to protect against widespread cybersecurity attacks that steal sensitive data to increase monetary leverage.
Introduction
In the first part of this two-part article, we referenced a shifting threat dynamic in which Ransomware steals data before encrypting it, then uses the recovered sensitive information for further financial gain. Though current evidence points to a significant public disclosure threat, it seems likely we'll see many more monetization methods. A quick look at Uber and Lyft provides more than adequate ammunition for growth in espionage-for-hire services.
Today, however, a majority of threats prey on weak security controls and poorly managed systems. As Lesley Carhart rightly noted in her blog post, Why NotPetya Kept Me Awake, "A good hacker avoids the use of malware and code exploits whenever possible". The damage this week's attacks inflicted, very much in the spirit of that message, is great cause for concern.
On the flip side, controls that are effective today won't be tomorrow. This article investigates these realities and explains what can and should be done for more suitable protection. Without greater consideration and deployment of such protections, this week may mark the start of a more common, ongoing threat dynamic.
Zero-Hour Protection
A wide variety of protective solutions offered zero-hour protection, meaning an up-to-date installation would have stopped the malware from inflicting damage the moment it was released. These include anti-malware and anti-ransomware technologies, machine-learning detection and prevention software, and the host protection "suites" common among anti-virus vendors, which often include technologies tailored to stop these threats.
This attack's success rested largely on ineffective (or non-existent) controls in affected systems; although the malware did include exploits for vulnerabilities recently patched by Microsoft, a good bit of the damage was done without any particularly creative exploit code.
This does not mean it's safe to assume these technologies are highly capable or particularly effective in general. They are very good at stopping threats like these, which depend on poorly configured or out-of-date targets. More advanced malware that makes use of 0-day exploits - those targeting vulnerabilities not yet known to the software's producers - would have bypassed at least some of these systems, though to what extent depends on a variety of factors.
Public Disclosure Threats, Espionage for Hire - What's Next?
Ransomware and similar malware campaigns are here to stay due to the high value realized in breaching networks. As more prominent targets become evident, at least some attackers will shift their approach toward these higher-value opportunities. Does this lead to more common threats of public disclosure? It's hard to fathom a world where a victim pays a thief who promises not to engage in further dishonest behavior because "this time I really mean it" - but as noted, we've already seen public threats, and it seems likely they will continue on some level. It's hard, after all, to prove you didn't share stolen data.
But no matter how we view it, we know that a shift from Ransomware encryption to data acquisition before encryption does not present insurmountable technical challenges. It changes the dynamics of an attack, but that can be managed with adjustments. Is it enough to threaten immediate file deletion if outbound connections are severed? Or will we instead see growth in automated, "low and slow" offloading that quietly acquires massive amounts of data to support espionage-for-hire services?
Each of these considerations holds one thing in common: host application data files are largely unprotected. By applying protections at the data level, we can provide a last-chance defense that, when layered with existing and emerging protections, presents a more daunting protective posture than many have yet considered.
Traditional Encryption and Localhost Crypto Protection
A good deal of free and consumer encryption software manages data locally, storing encryption/decryption keys on the local host. When an attacker acquires localhost credentials - as is the case with the Petya/NotPetya malware - those resources become accessible. Special-purpose malware designed to identify and offload encrypted content and decryption keys for offline access quickly defeats what is otherwise a false sense of security.
More capable business encryption software avoids at least some of this with additional protection for local decryption keys. This can be done using local hardware (a TPM, present on high-end hosts and becoming more prevalent as prices decline), or with additional hardware attached via USB ports. Some of this hardware - such as SmartCards - allows for complete offloading of sensitive cryptographic operations, isolating keys from unauthorized users. However, authorization is often based on local credentials, and again, malware quickly acquires these credentials and can be purposed to work around these protections. Though target-specific, this isn't hard - and organized crime can quickly "differentiate" by investing in libraries designed to defeat known protections.
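To make the weak pattern concrete, here is a minimal Python sketch of consumer-style local key storage. The file location and names are hypothetical, and the third-party cryptography package stands in for whatever cipher a given product uses:

    from pathlib import Path
    from cryptography.fernet import Fernet  # pip install cryptography

    # Hypothetical key location - the point is that it lives on the host.
    KEY_FILE = Path.home() / ".consumer_crypto" / "master.key"

    def encrypt_file(src: Path, dst: Path) -> None:
        KEY_FILE.parent.mkdir(parents=True, exist_ok=True)
        if not KEY_FILE.exists():
            # The key is generated and stored on the same host as the data.
            KEY_FILE.write_bytes(Fernet.generate_key())
        f = Fernet(KEY_FILE.read_bytes())
        dst.write_bytes(f.encrypt(src.read_bytes()))

    # Any process running with the user's (stolen) credentials can read
    # KEY_FILE and decrypt every protected file offline - the false sense
    # of security described above.

Anything that runs as the user - including malware holding harvested credentials - can copy the key file alongside the ciphertext, which is exactly why local key storage offers so little against these campaigns.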
The variability of these solutions, the extent to which they are and are not effective, and the difficulty of automating a Ransomware-like campaign against them would fill a good bit of a short book. Suffice it to say that, at one end, we have simple solutions that offer very little value except against low-skill attackers, and at the other, the idea of a complete solution that combines granular two-factor authentication with crypto offloading into an integrated design that takes each of these issues into consideration.
Two-Factor Authentication
Two-factor authentication is useful in stopping a localhost service that holds proper credentials from carrying out a task, because the 2nd authentication factor requires something other than a password - a USB token or a phone.
In truth, a lot of systems don't get this right, or don't take it far enough. For details, see our 2FA article posted in Spiceworks Spotlight on IT. In short, many systems use the 2nd factor to "unlock an archive", and an attacker need only wait for the authorized user to log in before content becomes available. In other cases, the 2nd factor is a USB key that remains plugged into the host, offering no protection during that time. An effective solution is a fine-grained 2nd-factor application that requires physical presence, such as that offered by a button on a USB key. This way, the key's presence is meaningless until a user touches it - and only then is a single 2nd-factor credential offered, which should trigger offloaded decryption. The sketch below illustrates the idea.
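Here is a hedged Python sketch of that touch-gated flow. The TouchToken class is hypothetical - a stand-in for a touch-activated USB key, since real devices expose vendor-specific APIs - but it captures the property that matters: one credential per physical touch, never a standing grant.

    import secrets

    class TouchToken:
        """Hypothetical USB token that releases one credential per touch."""
        def __init__(self):
            self._touched = False

        def touch(self):
            # Models the physical button press; nothing else can set this.
            self._touched = True

        def one_time_credential(self) -> bytes:
            if not self._touched:
                raise PermissionError("token present but not touched")
            self._touched = False  # credential is strictly single-use
            return secrets.token_bytes(32)

    def decrypt_document(doc_id: str, token: TouchToken) -> bytes:
        # Fails unless a human touched the key for this specific operation.
        cred = token.one_time_credential()
        # ... hand `cred` to the offloaded decryption service (see below) ...
        return b"<plaintext placeholder>"

A token left plugged in grants nothing here: each decryption demands a fresh touch, so malware waiting on an authenticated session gets no free ride.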
Some Requirements for Effective Host Data Protection
Ultimately, we're talking about a very closely choreographed integration of offloaded data encryption, extensive key management, and access controls using fine-grained 2nd-factor authentication. What does a software package with all of that look like? Hint: SSProtect.
For further consideration, here are some additional requirements for a proper solution (though this list is by no means complete):
Isolate Sensitive Operations
Encryption and decryption of sensitive content must take place somewhere other than a potentially compromised end-user host - in an environment specifically designed for security, and preferably one that is monitored 24/7. This moves sensitive operations out of an attacker's reach, protecting the sensitive materials used to access other sensitive materials. The process should produce interim materials that are then moved back to the end-user host and finalized for consumption. This interim result keeps end-users from exposing content to the provider.
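One way to realize "interim materials" is layered encryption, where the service strips only its own layer and never sees plaintext. The sketch below is an assumption-laden illustration, not the SSProtect design: the in-process "service" stands in for a hardened, monitored remote environment, and it again uses the third-party cryptography package.

    from cryptography.fernet import Fernet  # pip install cryptography

    host_key = Fernet(Fernet.generate_key())     # never leaves the end-user host
    service_key = Fernet(Fernet.generate_key())  # never leaves the service

    def protect(plaintext: bytes) -> bytes:
        inner = host_key.encrypt(plaintext)      # host layer applied first
        return service_key.encrypt(inner)        # service layer wraps it

    def service_unwrap(ciphertext: bytes) -> bytes:
        # Runs in the isolated environment: strips only the outer layer.
        # The interim result is still encrypted under host_key, so the
        # provider never observes plaintext.
        return service_key.decrypt(ciphertext)

    def host_finalize(interim: bytes) -> bytes:
        return host_key.decrypt(interim)         # finalized for consumption

    assert host_finalize(service_unwrap(protect(b"secret"))) == b"secret"

An attacker on the host who steals host_key still cannot decrypt stored content without the service's cooperation - and the service can refuse, rate-limit, or audit every unwrap request.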
2FA With Physical Presence
Each time sensitive materials are accessed, protective systems must require a 2-factor authentication process that demands physical action. This prevents attackers from exploiting the ineffective designs that remain widely deployed. One example is mobile authentication apps, which can be (and are) defeated unless they use NFC with a physical token, in which case they are usually suitable.
Independent Credentials
Credentials establishing a viable end-user session with encryption software must be independent from Windows credentials, since Windows credentials are almost immediately compromised. Note that keyloggers can be used to defeat this independent password, which is why 2-factor authentication must be applied in granular fashion.
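A minimal sketch of such independence, using only the Python standard library: the session key is derived from a separate passphrase via a key derivation function, so compromising the Windows login yields nothing. Parameter choices here are illustrative.

    import hashlib
    import secrets

    def derive_credential(passphrase, salt=None):
        """Derive a 32-byte key from a passphrase that is never the OS password."""
        salt = salt or secrets.token_bytes(16)
        key = hashlib.scrypt(passphrase.encode(), salt=salt,
                             n=2**14, r=8, p=1, dklen=32)
        return key, salt

    # A keylogger can still capture the passphrase itself, which is why the
    # granular, physical-presence 2nd factor above must gate every use of
    # the derived key.
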
Continuous In-Use Plaintext Protection
Plaintext content must be continuously protected from anyone or anything other than the authenticating resource. This blocks attackers that wait for authorized end-users to authenticate plaintext access and then steal data. If protective software only releases plaintext to end-user authorized applications, and those applications are not malicious (checked using digital signatures, as sketched below), the attacker must then break into the application itself. This raises the talent required to acquire data, and the burden grows as different applications are used.
Re-encryption must then close the loop on continuous protection by offloading operations as noted in the first requirement.
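The sketch below shows the release gate in simplified form. A production system would verify vendor code-signing certificates (e.g. Authenticode on Windows); hashing the requesting executable against a hypothetical allowlist stands in for that check here.

    import hashlib
    from pathlib import Path

    # Hypothetical allowlist: SHA-256 digests of approved application binaries.
    TRUSTED_DIGESTS = {
        "<sha256-of-approved-editor-binary>",  # placeholder value
    }

    def executable_is_trusted(exe_path: Path) -> bool:
        digest = hashlib.sha256(exe_path.read_bytes()).hexdigest()
        return digest in TRUSTED_DIGESTS

    def release_plaintext(exe_path: Path, plaintext: bytes) -> bytes:
        # Plaintext is handed only to a vetted application, never to an
        # arbitrary process holding the user's credentials.
        if not executable_is_trusted(exe_path):
            raise PermissionError(f"{exe_path} is not an approved application")
        return plaintext
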
Central, Accurate, Precise Auditing
All access must be audited by the offloading systems to retain secure, accurate records of data disclosure. This is the foundation for SSProtect :Respond Definitive Disclosure Risk insight.
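One common way to make such records tamper-evident - offered here as a standard-library sketch, not a description of the :Respond implementation - is to chain each record to its predecessor with an HMAC, so deletions or edits break the chain. Key management and transport to the offloading system are omitted.

    import hashlib
    import hmac
    import json
    import time

    AUDIT_KEY = b"server-side-secret"  # held by the auditing service only

    def append_record(log: list, user: str, doc_id: str, action: str) -> None:
        prev = log[-1]["mac"] if log else "genesis"
        record = {"ts": time.time(), "user": user, "doc": doc_id,
                  "action": action, "prev": prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["mac"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
        log.append(record)

    def verify_chain(log: list) -> bool:
        prev = "genesis"
        for rec in log:
            body = {k: v for k, v in rec.items() if k != "mac"}
            if body["prev"] != prev:
                return False  # a record was removed or reordered
            payload = json.dumps(body, sort_keys=True).encode()
            expected = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, rec["mac"]):
                return False  # a record was altered
            prev = rec["mac"]
        return True
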
There's Nothing to See Here
None of the noted techniques are new - many have been around for decades. The challenge lies in realizing a system that combines all of them into a practical solution that is usable, effective, and easy to manage.
There are, today, some choices that get you partway there, though in our experience - and in fact the reason we formed DefiniSec - each suffers considerable challenges in practical deployment, whether it's the massive IT commitment to a new infrastructure that supports distributed deployment, the cost of the system, incompatibility with existing applications, or an inability to scale. Nearly every system we reviewed prior to forming DefiniSec suffered more than one shortcoming with regard to what's required, and the fact that vendors have chosen not to invest in this area is likely more a matter of the threat landscape and existing customers than any inability to do so.
We believe this will change in short order as these threats shift and begin to make more effective use of sensitive information. We've already seen countless companies go out of business as a result of intellectual property loss, and some large organizations that have literally disappeared did so as a result of these types of attacks. You won't read this in the news because host countries haven't had reporting requirements beyond those that impact customer information. As a result, a company that suffers critical IP loss and falls behind in global competition doesn't have to tell you why. There are many more stories than are available in the public domain, owing to classified proceedings and to the integrity of those of us working on the front lines (who are often the sacrificial lambs, but that's part of our reality and gets offset in other ways).
Independent of which solution you choose, we strongly encourage members of the community - end-users, practitioners, and vendors - to revisit these realities and consider building these protective controls into their capabilities. Though today our solution effectively extends the security of existing applications and is simple to deploy, there are solutions serving other purposes that would greatly benefit from these technologies.
In every way, we are here to help. Please send questions, comments, suggestions, or inquiries to support@definisec.com.
This article was published June 29th, 2017