Data security is the most serious IT challenge organizations face today. As everyone in IT knows, the devil is in the details. With current technology and best practices, it’s possible to deploy a system that is highly scalable, reliable, and secure. But you have to develop deep understanding and the configuration must be correct. Let’s consider the benefits of cloud computing, the security challenges presented by its adoption, and best practices for data protection.
The Promise of Hybrid Cloud
Recent announcements by companies such as Microsoft, Dell, and HP regarding greater solution support for hybrid cloud are an indicator of what IT operations teams already know: some services must run in the cloud for the business to stay competitive, but many mission-critical services will remain on premises for the foreseeable future.
Making enterprise computing assets more available—via BYOD, cloud computing, etc.—has increased demand for those assets while decreasing the cost of providing them. This is great news for IT practitioners.
However, security policy enforcement in the hybrid cloud is challenging, since new data paths in and out of the enterprise network are inevitably being created. Plus, cloud computing is all about scale, so whatever security controls we put in place must be able to cope with large volumes of traffic and storage.
Cloud security challenge #1: Policy Drift
It’s not enough to deploy a secure software configuration. Without continuous enforcement, policy drift is inevitable. Software versions and computer configurations vary in subtle ways, even for a single provider or host. Over time, it is common for software and even hardware settings to change, due to operator error, incompatibilities, or side effects. Scanners can detect such issues, but it’s not enough to know about them—gaps must be remediated immediately in order to block opportunistic adversaries. Hence the need for enforcement.
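As a minimal sketch of that detect-and-remediate loop (the file path and baseline digests here are hypothetical; a production scanner would pull its baseline from a signed manifest and trigger remediation rather than just reporting), a drift check can hash watched configuration files and compare them against known-good values:

```python
import hashlib

# Hypothetical baseline: config file path -> expected SHA-256 digest.
# In practice these digests would come from a signed, versioned manifest.
BASELINE = {
    "/etc/ssh/sshd_config": "0f1e2d...",
}

def sha256_of(path: str) -> str:
    """Hash a file's contents for comparison against the baseline."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_drift(baseline: dict) -> list:
    """Return the paths whose current hash no longer matches the baseline."""
    drifted = []
    for path, expected in baseline.items():
        try:
            if sha256_of(path) != expected:
                drifted.append(path)
        except FileNotFoundError:
            drifted.append(path)  # a missing file is also drift
    return drifted
```

The point of the enforcement loop is that `detect_drift` runs continuously and its output feeds an automated remediation step, not a ticket queue.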
Security configuration is especially tough in the hybrid cloud, because you’re fighting a battle on two fronts. On one front are internet-based remote attackers. This is the front we usually think about; the challenge lies in how quickly the internet threat landscape changes, so security configuration must evolve with the threats. Best practices for addressing these challenges include removing unnecessary software and services, separating server roles, and enabling automated patching and reboots.
The second battlefront for maintaining secure configuration is internal IT operations. Let me give you an example. A JW Secure customer in the US government is deploying Secure Boot and remote platform attestation based on standards defined by the Trusted Computing Group. But, surprisingly, the principal business need driving the deployment isn’t network device authorization. It turns out that IT datacenter operations staff perform system maintenance overnight, and sometimes make security-sensitive configuration changes such as BIOS updates without informing their peer teams. Platform attestation allows such changes to be detected remotely, because the cryptographic hash of the BIOS can be interrogated. Thus, attestation can serve a dual role: blocking connection attempts from untrusted external platforms and baselining internal system configuration.
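The detection mechanism rests on the TPM’s PCR-extend operation: each measured boot component folds into a running hash that cannot be rewound. The sketch below (event values are hypothetical, and a real verifier would parse a TCG event log and check a signed TPM quote rather than raw bytes) shows why any change to the BIOS image changes the final PCR value a remote verifier sees:

```python
import hashlib

PCR_SIZE = 32  # SHA-256 PCR bank

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new = SHA-256(old || SHA-256(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

def replay_event_log(events: list) -> bytes:
    """Recompute the expected PCR value from a measured-boot event log."""
    pcr = b"\x00" * PCR_SIZE  # PCRs start at all zeros
    for event in events:
        pcr = extend(pcr, event)
    return pcr

def verify_quote(quoted_pcr: bytes, baseline_events: list) -> bool:
    """The quote matches only if every measured component (e.g., the
    BIOS image) is exactly what the baseline event log says it is."""
    return quoted_pcr == replay_event_log(baseline_events)
```

An overnight BIOS update changes the first measurement, so the replayed PCR no longer matches the baseline and the change is visible remotely, whether or not the operations team announced it.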
Cloud security challenge #2: Key Management
Without careful key management, data encryption in the cloud buys you compliance and nothing more. Key management is hard, though, and even good key-management systems are only as strong as the authorization rules that protect them.
JW Secure recently worked with an enterprise IT shop that is under pressure from DevOps and the corporate Board of Directors. On one hand, DevOps is moving quickly to the cloud, including the adoption of services such as Microsoft SQL Azure with database encryption using Azure Key Vault. Data protection in the cloud is a good thing, except that corporate policy requires that corporate data be recoverable. Because keys can end up under the sole control of an individual user, and because Azure Key Vault keys may be stored in a leased datacenter, the requirements of the corporate data-recovery policy were not being met.
In fact, this problem is more general than key management, since part of the appeal of cloud computing to DevOps is the opportunity to bypass the in-house IT team. To help solve this problem, JW Secure worked with that company to implement reusable scripts that fully automate common DevOps data encryption scenarios. In other words, we helped make the developers more efficient, which is after all why they’re moving to the cloud. In addition, we implemented those scripts to perform key archival in a way that satisfies the board’s data-recovery policy.
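The underlying pattern is envelope encryption with key archival: each data key is wrapped both for its owner and for a corporate recovery key, so losing the user’s key never means losing the data. The sketch below is illustrative only; in particular, the XOR-based `wrap` is a deliberate stand-in for a real key-wrap algorithm such as AES Key Wrap or a vault service, and all names are hypothetical:

```python
import hashlib
import secrets

def wrap(kek: bytes, data_key: bytes) -> bytes:
    """Placeholder key wrap: XOR with a hash-derived keystream.
    A real deployment would use AES Key Wrap or a vault service."""
    stream = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(data_key, stream))

unwrap = wrap  # this toy XOR wrap is its own inverse

def provision_data_key(user_kek: bytes, recovery_kek: bytes) -> dict:
    """Generate a data-encryption key and archive it under BOTH the
    user's key and the corporate recovery key, so the data remains
    recoverable if the user's key is lost or the user leaves."""
    data_key = secrets.token_bytes(32)
    return {
        "wrapped_for_user": wrap(user_kek, data_key),
        "wrapped_for_recovery": wrap(recovery_kek, data_key),
    }
```

Baking this step into the automation scripts is what reconciles the two pressures: developers still get one-command encryption, and the recovery copy exists by construction rather than by policy memo.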
Here’s another example. With public cloud, DevOps interact on a daily basis with internet-facing APIs. Authentication is done using long-lived access keys that are often stored in plaintext files, shared between developers, and sometimes publicly exposed. This creates a significant security risk: possession of such credentials provides unconditional, long-lived access to leased resources, which can be catastrophic if the credentials are compromised. Amazon Web Services offers two-factor authentication for many user-driven cloud management scenarios; we recommend that customers adopt that capability wherever possible.
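One cheap, complementary mitigation is scanning files for long-lived access-key IDs before they leave the building. AWS access key IDs follow a well-known pattern (`AKIA` plus 16 uppercase alphanumerics for long-lived keys, `ASIA` for temporary STS keys), so a pre-commit or build-time scan might look like this sketch:

```python
import re

# AWS long-lived access key IDs start with "AKIA" followed by 16
# uppercase alphanumerics; temporary (STS) key IDs start with "ASIA".
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list:
    """Return any access-key IDs found in plaintext, e.g. a config
    file or a source tree about to be pushed to a public repo."""
    return ACCESS_KEY_RE.findall(text)
```

A match should fail the commit or build; the longer-term fix is replacing the long-lived key with short-lived, MFA-gated credentials.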
Cloud security challenge #3: Secure Computer Identities
Server identities expressed using Public Key Infrastructure (PKI) are an improvement over server identities authenticated using static passwords, but server-side PKI typically uses software-based keys that are easily exported. Forrester estimates that “70 percent to 80 percent of data breaches involve the use of privileged and administrative passwords and credentials (Unix root and Windows Administrator passwords).” Only hardware-based device identities should be allowed when valuable data are being stored or transmitted.
There’s a logical progression here. Anything is better than a static password. Cloud API keys are stronger, although they tend to be linked to online web accounts that are protected by weak, reused user passwords. Software certificates are better still, although their private keys can be exported, which undermines their value. Plus, PKI tends to increase operational costs.
Cloud HSMs are a hybrid in terms of security model: they offer non-exportable keys, but usage of those keys can be harder to control. On-premises HSMs have the same shortcoming, in fact, with the further disadvantage that they tend to be expensive. (HSMs used exclusively offline mitigate some of these risks, although “sneaker net” protocols have vulnerabilities, too.)
This is why strong authentication of user accounts and computer accounts, plus hardware root of trust for cryptographic keys, are so important to data protection. In tandem, these policies complement and reinforce each other.
While server authentication is common in client connectivity protocols such as TLS and 802.1x, strong authentication is less commonly enforced in machine-to-machine scenarios such as accessing backend storage (e.g., SMB and SQL) and datacenter management (e.g., virtual-machine migration). Authorization must instead be enforced at every hop, using standards-based protocols such as IPsec and SSH.
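IPsec and SSH configurations are system-specific, but the per-hop idea can be illustrated with mutual TLS: the server refuses any peer, internal or external, that cannot authenticate with a certificate. A minimal sketch (certificate, key, and CA loading are omitted for brevity; a real server would also call `load_cert_chain` and `load_verify_locations` before accepting connections):

```python
import ssl

def mutual_tls_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that will refuse any peer that
    cannot present a certificate chaining to a trusted CA -- i.e.,
    authorization enforced at this hop, not assumed from the network."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED        # mutual TLS: client must authenticate
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    return ctx
```

The same posture applied at the SMB share, the SQL endpoint, and the management plane is what keeps one compromised hop from becoming free lateral movement.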
Advanced cyber-attacks such as Pass the Hash and rootkits are increasingly prevalent in the datacenter and are difficult both to detect and to stop from spreading. According to the Microsoft Security Intelligence Report, “Even though current PtH methods concentrate on password hashes, similar attacks can (and do) work against other authentication mechanisms, including tokens, delegation, and two-factor authentication. If the attacker is able to capture the ultimate secret—be it a password hash, a token, or some other entity—the attack will succeed.” Server-side data protection must account for the integrity of the end-to-end operating environment when authorizing a connection request.
We recommend data protection based on a foundation of hardware-protected device identity. Then, building on that foundation, the following capabilities are required:
- Data-protection enforcement is local to each server. For example, the virtual TPM (Trusted Platform Module) capability supported by hypervisors such as Xen and Microsoft Hyper-V allows hardware protection policy to be cryptographically enforced for each server workload.
- Data are encrypted, at rest and in flight, until rendered for display.
- Data are rendered into plaintext only in a protected environment (that is, on no unmanaged user devices or servers).
- Techniques for handling device loss, user termination, or an aborted operation are robust.
Corporate boards of directors can no longer accept the legal risk of waiting for a major security incident before instituting proper defenses. Mandiant, Microsoft, and Verizon have all published sobering reports that point to the importance of staying vigilant: keep patching up to date, fix security bugs in internet-facing web apps, reduce the impact of phishing, slow lateral movement of intruders on your network, and respond quickly to attacks once they occur. Hardware root of trust and continuous monitoring address those problems by ensuring a broad range of policies is enforced at the time of authentication, thereby protecting high-value network assets.
Risk assessment is something humans tend to do poorly. Careful, considered analysis of risks and assets, combined with staying informed about online security, is the only way to effectively prioritize and mitigate those risks. Just because cyber offense has an advantage over defense does not mean attacks cannot be stopped, or at least greatly slowed down. Security policies must be enforced.