This is the third in a series of Insights articles that detail challenges associated with proper Host/Endpoint Data Management.
This article is our first to propose a central plan for building a complete system. The entry took longer than expected, as our first attempt offered little independent merit. Rather than complicate the issue, we chose an alternate path: present a real-world problem, then propose hypothetical but practical support for it.
We start simply, describing a corporate user attempting to access corrupted data files on a laptop. We continue with a hypothetical sequence of events that describes functionality minimizing end-user disruption while also supporting IT's needs for investigation and response.
We use this as our target, then introduce the baseline concept that delivers similar results. This raises a number of questions that help us recognize the challenges we must overcome with numerous innovations, described in upcoming articles. Before closing, we discuss the potential advantages of using our proposed methods, then finalize practical system expectations.
Minimizing End-User Impact From Data Corruption/Ransomware
Imagine that you sit down at your laptop, log in, and attempt to open a presentation you've been working on. What happens if it's corrupted or...missing?
If you're working with a proper data management solution, you'd be able to find and Restore the latest version of your work, then carry on. A proper system wouldn't have much impact on what you do or how you do it, but your history and data would be available to you at almost any time.
There are in fact endless ways you might arrive at that point; one of them might even include a notice, right at login, that some if not all of your managed content has been affected. Nonetheless, you should be able to Restore all working content and also see a Report of affected and Restored items.
In either case, you'd probably contact IT (though the data management solution should notify them on your behalf), and they'd probably ask you to hand over your laptop (to investigate) and give you a replacement. In a perfect world, the replacement would include your data. If not, in a self-service model, you'd follow a link to quickly install a client applet and log in (with 2FA, of course); the software would then recognize the new environment and offer to populate your working set. Done properly, your information would be available quickly, and in the same folder structure as before.
Managing Corrupted End-User Data
If you're in IT, you have a very different focus. Your first step may be to find out whether others have been affected, and the very same data management system should allow you to look at all managed users and their associated content to check Integrity and Restore affected items (if that hasn't been done automatically). A summary Report would tell you who had been affected and to what extent, allowing you to maintain operational priority or lock down the system (rare).
In certain circumstances, you'd recognize a bigger problem and begin to consider its potential impact. What if it were possible to get a Summary of managed items that had been utilized in the past few weeks, across the board, with some notion of risk? What if that risk were objectively based on usage?
In a well-designed system, one might be able to claim that unused content retains a high degree of protection. With precision auditing that's created and managed in an isolated, secured fashion, increasing levels of use would represent increasing risk exposure, providing an Objective Disclosure Risk Summary representing the "worst-case scenario". This would be of exceptional value in prioritizing Response activity and scoping potential impact.
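To make the idea concrete, here is a minimal sketch of usage-based risk scoring. The audit events, item names, and the decrypt-count metric are all illustrative assumptions, not the product's actual schema; the point is simply that an item never converted to plaintext scores zero, while each audited plaintext access raises its worst-case exposure.

```python
from collections import Counter

# Hypothetical audit events: (item, action). Names are illustrative only.
AUDIT_LOG = [
    ("Q3-forecast.xlsx", "decrypt"),
    ("Q3-forecast.xlsx", "decrypt"),
    ("board-deck.pptx", "decrypt"),
    ("archive-2017.zip", "store"),   # stored but never opened
]

def disclosure_risk(audit_log):
    """Worst-case disclosure risk per item, derived purely from usage.

    An item that was never converted to plaintext in the audit window
    is treated as retaining full protection (risk 0); each audited
    plaintext access increases its potential exposure.
    """
    decrypts = Counter(item for item, action in audit_log if action == "decrypt")
    all_items = {item for item, _ in audit_log}
    return {item: decrypts.get(item, 0) for item in sorted(all_items)}

scores = disclosure_risk(AUDIT_LOG)
assert scores["archive-2017.zip"] == 0   # unused: still fully protected
assert scores["Q3-forecast.xlsx"] == 2   # two audited plaintext accesses
```

A real system would weight the score by data sensitivity and recency, but even this raw count is objective: it is computed from audited events, not from guesses about what an attacker might have done.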
Any such Disclosure Risk Insight would also partition "owned" data from "third-party" data, allowing you to share results with partners, customers, and/or vendors at different times, based on the need to coordinate investigation or report findings. Added control and insight would reduce overall costs.
Would you believe it if we told you this exists, today?
Moving Sensitive Operations to a Special-Purpose Environment
Everything you just read does in fact already exist. The system uses patented methods to offload sensitive operations to the cloud, providing the foundation to deliver the scenario presented above. To understand how this is possible, let's back up a bit and look at the underlying mechanism.
First, we want to obfuscate sensitive documents/files/email. Once in obfuscated - or protected - form, content can be exposed to unsecured systems with less concern for disclosure. This allows you to use internet sync and sharing services, free unsecured email, and public Wi-Fi in coffee shops and airports, and even lets you share content directly with unprotected systems. All of these dynamics - and countless others - threaten host computing resources. If we properly obfuscate content - and find a way to control the act of recovering plaintext - we reduce the threat of disclosure when the host is compromised.
To achieve obfuscation, we use encryption. Encrypted content can't be interpreted until it's decrypted, and this requires, at the very least, a single (symmetric) key. In today's usage dynamics, exposing decryption keys on a host computer is dangerous - even when utilizing locally-connected, specialized hardware. Though machine learning, AI, and a constant stream of new endpoint detection and prevention techniques continue to improve endpoint security, we know that attackers can - and will - break into companies, establish a presence, and, over time, offload massive amounts of information.
To work against this, we move encryption and decryption to another place - a specialized, isolated place that's easier to secure than an "open", flexible, user-friendly environment. This is in fact not new, as demonstrated by smart cards and specialized hardware like TPMs and Intel SGX. But these devices aren't yet available on all host computers, and they are generally resource-limited while presenting configuration and maintenance challenges. More importantly, they must be protected from attackers that impersonate authorized users - a problem we have to solve no matter what.
We can address these concerns while deriving additional benefits by moving encryption/decryption to the cloud. This provides extensive resources we can bring to bear, but also presents many additional questions, for example: Can we do this securely, at scale, without significant impact to performance? Are we exposing content to other systems? Are we creating new opportunities for attackers? Can we really deliver a usable solution with this approach?
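The architectural idea can be sketched as follows. This is a toy model, not the patented system: `CloudKeyService` stands in for the isolated cloud environment, and the SHA-256 counter keystream is a demonstration device only (a production service would use an authenticated cipher such as AES-GCM with per-message nonces). What matters is the boundary: the host-side code exchanges only plaintext-in/ciphertext-out with the service and never handles key material.

```python
import hashlib
import secrets

class CloudKeyService:
    """Toy stand-in for an isolated, cloud-hosted crypto service.

    The symmetric key lives only inside this class; host-side code
    calls encrypt()/decrypt() and never sees the key. The XOR
    keystream below is illustrative only - NOT real cryptography.
    """
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the service

    def _keystream(self, n: int) -> bytes:
        # SHA-256 in counter mode, purely for demonstration.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(self, plaintext: bytes) -> bytes:
        ks = self._keystream(len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self.encrypt(ciphertext)  # XOR stream is its own inverse

# Host side: only ciphertext is ever stored locally.
service = CloudKeyService()
ct = service.encrypt(b"quarterly results draft")
assert ct != b"quarterly results draft"
assert service.decrypt(ct) == b"quarterly results draft"
```

Notice that compromising the host yields only ciphertext; the questions that follow are about whether this boundary can be held at scale and speed.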
We should first look deeper into the potential advantages of this approach to consider trade-offs. Let's start with some basics.
Baseline Benefits of Offloading to the Cloud
Given the nature of today's threat, we can no longer afford to focus on protecting one aspect of a system - we now have to build an integrated solution that is designed to maintain consistency in compromised environments, reducing sensitive data exposure while also minimizing the impact of security events.
We can move in this direction with layered defenses deployed across different aspects of a complete system. The market, however, has become saturated with "technique-based" products. Complete solutions require expensive integration - design, planning, engineering, and deployment - while also increasing the need for administrative staff and maintenance. The resulting complexity can create attacker opportunity, and such considerations are often beyond the reach of small and medium-sized companies - the ones most vulnerable to the catastrophic potential of a full-scale breach.
By offloading encryption and decryption to the cloud, we gain central control and many advantages, most importantly:
- We manage authorized data sharing by Policy, without "encrypting for a user or group". This increases control while simplifying end use.
- We store managed content, also available for secured Restore. This increases Availability (and Integrity) to combat Ransomware/sabotage.
- We centralize accurate, detailed audit records. This creates a foundation for Analysis and Remediation with actionable insight.
This avoids point-product integration, potentially unifying controls into a single system that's easier to deploy and administer. We gain the possibility of built-in, reliable Backup/Restore with the promise of secure, precision tracking for analyzing Disclosure Risk while retaining Integrity and Availability.
Two Incredible Challenges and Two Considerations
In our experience, this is the point in the conversation where over 90% of participants believe the approach isn't viable. Of the 10% who have identified ways to achieve meaningful relationships between "secure tracking" and "continuous host data protection", all but one have concluded that the effort required to innovate around the open questions exceeds the value achieved in doing so (compare this to a general definition of security).*
We now know this isn't the case, but offer two fundamental challenges at the core (for different reasons):
- We have to apply controls while ciphertext is converted to plaintext and then used in native host applications - with minimal user impact
- We have to isolate plaintext from cloud resources at all times, over all time (i.e., the value of disclosure is zero)*
Though we purposely exaggerated the second requirement (too much?), we don't impose stringent parameters on the first. We will discuss these issues, and how we address them, in our next offering. In the meantime, we ask you to consider two things:
- What happens if we employ a split-key model that requires an attacker to compromise the host and the cloud before ciphertext is at risk?
- What happens if we can maintain control of plaintext, independent of the application, on a host computer - backed by auditing certainty?
If we can do that, would you agree that we've managed to decouple storage requirements from disclosure liabilities? If so, what are the implications? What happens when a court order for your data goes to your MSP? Does that mean you then also have to be contacted before plaintext is available (hint: YES)? And how does this impact shared use, BYOD computing, and flexibility?
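The first question above describes a split-key construction, and a minimal sketch makes the property easy to see. Assuming the simplest possible scheme (two XOR shares; real deployments would likely use a richer secret-sharing or key-wrapping design), each share on its own is indistinguishable from random bytes, so an attacker must compromise both the host and the cloud before the key - and therefore the ciphertext - is at risk:

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a symmetric key into two XOR shares.

    Neither share alone reveals anything about the key: each is
    uniformly random. The host stores one share, the cloud stores
    the other; only their combination recovers the key.
    """
    host_share = secrets.token_bytes(len(key))
    cloud_share = bytes(k ^ h for k, h in zip(key, host_share))
    return host_share, cloud_share

def recombine(host_share: bytes, cloud_share: bytes) -> bytes:
    # XOR the shares back together to recover the original key.
    return bytes(h ^ c for h, c in zip(host_share, cloud_share))

key = secrets.token_bytes(32)
host_share, cloud_share = split_key(key)
assert recombine(host_share, cloud_share) == key
```

Stealing the host's share (or subpoenaing the cloud's) yields nothing usable by itself - which is exactly why the court-order question above has the answer it does.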
Are these important issues? What if...
* Something can be determined to be Secure when the absolute cost of acquiring it exceeds the total value gained by doing so.
What We Can Expect to Accomplish
The scenario we offered is a real-world example that often results in significant impact to an organization. When content is corrupted by Ransomware, some organizations choose to pay the ransom and acquire decryption keys, hoping to retrieve data not available through their Backup/Restore operations. Though paying has often unlocked data, those who choose not to pay will be increasingly threatened with public disclosure - and that's a threat that never ends. For this - and many other reasons - the importance of and need for proper user document, file, and email encryption is increasing.
This is but one of hundreds of different considerations, and related activities inhibit end users from continuing productive efforts, with an increasing impact on business operations. That, together with the lack of reliable insight into breach activity, imposes unnecessarily high costs on Incident Response and Recovery operations that too often fail to deliver on their claims. The proposed system seeks to improve in these areas by:
- Minimizing information exposure when attackers find ways to bypass layered detection, prevention, and management defenses
- Increasing the ability to recover any and all managed data items, in whole or in part, on an existing or new host, or in offline Archive form
- Improving visibility into managed data access events - while also increasing certainty that report insight reflects real history
Can we deliver on these promises while addressing open questions? Send us a note at email@example.com and we'll show you exactly how.
This article was published January 29th, 2019