2014-11-25

The good old days: monitoring company users on company-owned PCs, accessing the company data center across corporate-owned networks. You knew where everything was and who was using it. And since the company owned it all, you could pretty much dictate where and how you performed security monitoring. With cloud and mobile? Not so much.

If you embrace cloud computing, you’re going to need to embrace new approaches to collecting event data if you hope to continue security monitoring. The sources, and the information they contain, are different. Equally important, although initially more subtle, is how you deploy monitoring services. Deployment architecture is critical to building and scaling any Security Operations Center; it defines how you manage security monitoring infrastructure and what event data you can capture. Furthermore, how you deploy the SOC platform impacts performance and data management. There are a number of different solution architectures to meet the use cases we outlined in the last post, so we’ll now focus on the alternative means of deploying collectors in the cloud(s) and the possibility of using a cloud security gateway as a monitoring point. Then we take a look at the basic cloud deployment models for a SOC architected to monitor the hybrid cloud, focusing on how we manage pools of event data coming from distributed environments, both internal and external to the organization.

Data collection strategies

API: Automation, elasticity, and self-service are intrinsic characteristics of cloud computing. Most cloud service providers offer a management dashboard for convenience (and for unsophisticated users), but advanced cloud features are typically exposed only via scripts and programs. Application Programming Interfaces (APIs for short) are the default interfaces provided by cloud services, and are essential to configure the cloud environment, start monitoring services, and gather data. These APIs can be called from any program or service, running either on-premise or within a cloud environment. In other words, APIs are the cloud analogue of agents, providing many of the same capabilities in cloud environments where the ‘platform’ is a virtualized abstraction and it’s not possible to insert a traditional agent into the cloud infrastructure. API calls may return data in a variety of ways, including the familiar syslog format, a JSON file, or even a cloud-provider-specific format. Regardless, making API calls and aggregating the resulting data will be one of the new sources of information you need to monitor the hybrid cloud.
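
Because each provider returns events in its own shape, the aggregation step usually means normalizing everything into one common schema before it hits your SIEM. Here is a minimal sketch of that idea in Python; the field names (`eventTime`, `userIdentity`, etc.) and the syslog line layout are illustrative assumptions, not any specific provider’s format:

```python
import json

def normalize_event(raw, source):
    """Map a raw event from a cloud API into a common schema.

    Handles two hypothetical shapes: a JSON payload (as many cloud
    APIs return) and a syslog-style line. The common schema keys
    ('timestamp', 'user', 'action', 'source') are illustrative.
    """
    if source == "json":
        evt = json.loads(raw)
        return {
            "timestamp": evt.get("eventTime"),
            "user": evt.get("userIdentity", {}).get("userName", "unknown"),
            "action": evt.get("eventName"),
            "source": "cloud-api",
        }
    elif source == "syslog":
        # e.g. "2014-11-25T10:00:00Z host app: user=alice action=login"
        ts, _host, _app, kvs = raw.split(" ", 3)
        fields = dict(kv.split("=", 1) for kv in kvs.split())
        return {
            "timestamp": ts,
            "user": fields.get("user", "unknown"),
            "action": fields.get("action"),
            "source": "syslog",
        }
    raise ValueError("unsupported source: %s" % source)
```

In practice a collector would poll each provider’s API on a schedule, run every returned record through a mapping like this, and forward the normalized stream to the SOC.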

Cloud Gateways: Monitoring hybrid clouds commonly relies upon gateways – typically an appliance deployed at the ‘edge’ of the network to collect events. Leveraging existing infrastructure for data management and SOC interfaces, this approach forces all cloud usage to first authenticate to the cloud gateway, which acts as a choke point; after inspection, traffic is passed on to the appropriate cloud service. Resulting events are then passed to event collection services, just like on-premise infrastructure. This allows tight integration with existing security operations and monitoring platforms, and the initial authentication allows every resource request to be tied to a specific set of user credentials.

Cloud 2 Cloud: A newer option is to have one cloud service – in this case a monitoring service – act as a proxy to another cloud service, tapping into user requests and parsing out relevant data, metadata, and application calls. Similar to using a managed service for email security, traffic is run through a cloud provider which parses incoming requests before passing them along to internal or cloud applications. This model can incorporate mobile devices and events – which otherwise would never touch your on-premise networks – because they pass through an inspection point before reaching cloud service providers like Salesforce or Microsoft Azure. This approach allows the SOC to provide real-time event analysis and alerting on policy violations, with collected events forwarded to the SOC (either on-premise or in the cloud) for storage. In some cases these cloud services can also add security (since the traffic is proxied), such as checks against on-premise identity stores to ensure an employee is still with the company before granting access to cloud resources.

App Telemetry: Like cloud providers, mobile carriers, mobile OS providers, and handset manufacturers don’t provide much in the way of logging capabilities. The mobile platform is meant to be secured from outsiders, and to not leak information between apps on the device. However, we are beginning to see mobile apps developed specifically for corporate use, as well as company-specific mobile app containers, which send basic telemetry feeds back to the company to provide visibility into device activity. Some telemetry feeds include basic data about the device, such as ‘jailbreak’ detection, while others append user-specific fingerprint data to authorize requests for remote application usage. These capabilities are compiled into the mobile app, or embedded into the container that protects corporate apps and data. This type of capability is very new, and will eventually help support fraud and misuse detection from the mobile endpoint.
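
On the SOC side, consuming such a telemetry beacon amounts to checking the reported device attributes against policy. The sketch below assumes a hypothetical payload with `jailbroken` and `os_version` fields; the field names and the minimum OS version are illustrative, not any vendor’s actual telemetry format:

```python
# Illustrative policy: minimum acceptable mobile OS version.
MIN_OS = (7, 0)

def evaluate_telemetry(payload):
    """Return a list of risk signals found in a device telemetry beacon."""
    risks = []
    if payload.get("jailbroken"):
        risks.append("jailbroken-device")
    # Compare versions numerically, not as strings ("10.0" < "7.0" as text).
    version = tuple(int(p) for p in payload.get("os_version", "0.0").split("."))
    if version < MIN_OS:
        risks.append("outdated-os")
    return risks
```

Events carrying risk signals like these can then be correlated with the user’s cloud activity in the SOC, which is where the fraud and misuse detection value comes from.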

Agents: While it’s highly unlikely you’ll be deploying any agentry in SaaS or PaaS clouds, agents have their place in hybrid, private, and Infrastructure as a Service (IaaS) clouds where you control the infrastructure. Since the network architecture is virtualized in most clouds, agents offer a means to collect events and configuration information where you don’t have traditional visibility or tap points. Agents can also call out to cloud APIs to check application deployments.

Supplementary Services: Cloud SOCs often rely upon third party intelligence feeds to correlate hostile acts or actors attacking other customers, helping you identify and block their attempts to abuse your systems. These are almost always cloud-based services that provide intelligence, malware analysis, or policies based on a broader analysis of many other sites and data to detect patterns of unwanted behavior. This type of threat intelligence is supplementary to Hybrid SOCs and can help the organization detect potential attacks faster, but is not in and of itself a SOC platform. You can refer to the various threat intelligence papers we’ve written to dig deeper into this topic. (link to threat intel research)

Deployment Strategies

The following are all common deployments of event collectors, monitoring systems and operations centers to support security monitoring:

On-premise: We will forgo a detailed explanation of on-premise SOCs, as this is the model most of you are already familiar with, and we’ve written extensively on the topic. For the most part the infrastructure that monitors a hybrid cloud remains the same. The most significant change is the inclusion of remote cloud, mobile event, and configuration data, along with monitoring policies specifically designed to digest remote events. Be prepared for significant change, as cloud and mobile event data is not always in the same format, and commonly includes slightly different information from one source to the next. So remember all of that work you had to do a decade ago to get connectors to properly parse security event data? You’ll be doing that again until a more standard format emerges. It will also require a new round of tuning of detection rules, as seemingly acceptable activities for internal users and systems could be malicious when coming from remote locations and cloud services.

Hybrid: A hybrid SOC is any deployment model where some of the analysis work is done in-house, and some is done remotely in the “cloud.” The remote portion could be offloaded to a monitoring service vendor, as described in the ‘Cloud 2 Cloud’ model above, or preliminary ‘level one’ analysis could be performed by the managed services team, with advanced forensic analysis performed by internal resources. Here you continue to run and operate the existing SIEM, with all of its event collectors, and send a subset of events to an external provider for the heavy lifting of event analysis and forensics. Alternatively, you could use the external provider to directly aggregate and analyze remote/cloud activity and send filtered alerts to the on-premise SOC. A hybrid SOC provides agility in addressing new challenges while leveraging in-house investments and expertise, though there will be a cost to maintaining both internal and external monitoring capabilities.
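
The “subset of events” decision is usually just a filtering policy applied before export. As a minimal sketch (the severity levels and the threshold are illustrative assumptions, not a recommendation), the local collector might keep the full stream and ship only higher-severity events to the external provider:

```python
# Illustrative export policy: only these severities leave the premises.
FORWARD_SEVERITIES = {"high", "critical"}

def select_for_export(events):
    """Keep the full event stream locally; return only the subset of
    events that should be forwarded to the external provider for
    deeper analysis and forensics."""
    return [e for e in events if e.get("severity") in FORWARD_SEVERITIES]
```

In a real deployment this policy would also weigh data sensitivity and bandwidth cost, which is part of the tradeoff discussion we get to in the next post.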

Exclusively Cloud: While still a rarity, it is possible to push all data from both your on-prem and cloud services to a third party for full remote SOC services. This model involves the external SOC providing all data management, analysis, policy development and retention functions. On-prem events are fed through a gateway to the cloud service, with the gateway performing some filtering, compression and security functions to protect the event data.

3rd Party Management: Many large enterprises run their security operations in-house, with a team of company employees monitoring systems for attack and performing forensic analysis on suspicious alerts. But not every firm has a sophisticated and capable security team in-house to do the difficult and expensive work of writing policies and performing analysis. Thus it’s attractive (and increasingly common) to offload the hard analysis problems to others, and keep only a portion of the function in-house. You have some flexibility in how to engage with the service provider. One approach is to have the service provider take control of your on-premise monitoring systems. Alternatively, the 3rd party can supplement what you have by handling just external cloud monitoring. Finally, in some cases the entire SOC is pushed to the 3rd party for operations and management.

In our next post we will sketch out what you really need to know when choosing how to proceed: the gotchas – all of the problem areas and tradeoffs you must consider before selecting among the data collection and deployment options we scoped above. We will dig into problems of scalability, cost, data security, privacy, and even some data governance issues that make your decision between solutions a little more difficult.

- Adrian Lane