Cloud security operations teams, especially those evaluating security technologies for the first time, often face a daunting list of vendors offering technologies with wide-ranging capabilities. Understanding the pros and cons of each can seem difficult or impossible at first, especially because the enterprise security sector is inundated with technologies that address security from a defense-in-depth perspective, offering different tools at each layer: firewalls, VPNs, IDS, IPS, log collection tools, SIEM tools, routers and switches with security capabilities, endpoint security tools, vulnerability management tools, threat management tools, and more.
This post is aimed at clarifying the functions of the available security technologies, along with their strengths and weaknesses, so you can begin to assess the security requirements of your organization’s cloud environment and identify the technology that best meets your needs. It is specifically intended to help people who are just starting to evaluate cloud security products, as well as those who are re-architecting their cloud environments to achieve stronger security, better efficiency, or greater scale in the cloud.
Security Technologies in the Cloud
Security technologies can be placed at different locations in the cloud; each location requires its own type of technology, and each technology has its specific advantages and disadvantages as shown in the following table.
| Location | Technology Examples & Functions | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Host (Kernel space) | Agent in the kernel space, collecting user, process, network, and file actions. | An agent in the kernel space is easy to develop because it has control over all aspects of the workload and can do anything to the workload. | The kernel is a bad place to run the agent: it introduces performance and stability issues, and because many exploits insert themselves as kernel modules, it is hard to distinguish an agent from an exploit. |
| Host (User space) | Agent in the user space, collecting user, process, network, and file actions. | An agent in the user space can be controlled according to the user’s needs and has minimal impact on the workload. | An agent in the user space is difficult to develop. |
| Host (Logs) | Log collectors, ELK stack. | Monitoring logs is an easy way to collect data from workloads, albeit mostly for application-level visibility. | Logging can be hard to configure because of the amount of tuning required to obtain valuable data. Logs can be manipulated easily and are therefore not a “single source of truth,” and they are difficult to maintain because application signatures change. |
| Host (File Monitoring) | Modern agent-based FIM tools that do not rely on file hashing; traditional agent-based FIM tools based on file hashing (see the sketch after this table). | FIM provides visibility into the access and manipulation of sensitive customer and configuration data. | Traditional FIM tools are resource intensive, although cloud-based, lightweight FIM technologies are available. |
| Host, Network | Vulnerability monitoring and management. | Provides visibility into package-level vulnerabilities and misconfigurations in the workload. | Generally produces a large number of false positives. |
| Network | Network visibility from inside the host; VPC flow logs; traditional network IDS tools. | Network-level visibility gives a view into who is “knocking at your door” (reconnaissance) and the type of attacks being attempted (sophisticated nation state or script kiddies). | Obtaining deep packet-level visibility is CPU intensive, and encrypted intra- and inter-host traffic makes deep packet analysis impossible. |
| Network | VPCs, security groups, subnets. | Placing cloud assets into the right infrastructure segments with effective segmentation prevents easy insider and external access into deeper parts of the environment. | None. |
| Infrastructure | CloudTrail logs. | Provide an awareness of the state of the infrastructure and visibility into API calls. | Have to be contextualized with host-level data. |
| Internet edge | VPC flow logs. | Provide visibility into attacker reconnaissance activity and into data loss (transfer of large amounts of data). | Information at the VPC level does not indicate which specific host is being enumerated. |
| Host, Network | Threat intelligence. | Comparing activity inside the cloud environment with known IOCs is an effective, deterministic way to understand and analyze a breach. | Requires sophisticated users. |
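To make the “file hashing” row concrete, here is a minimal Python sketch of the traditional FIM approach, not any particular vendor’s implementation; the monitored file list and the local baseline file are assumptions chosen for the example.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical list of sensitive files to watch; a real FIM tool would read
# this from its policy configuration.
MONITORED_FILES = ["/etc/passwd", "/etc/ssh/sshd_config", "/var/www/app/config.yml"]
BASELINE_PATH = Path("fim_baseline.json")  # local baseline store for this sketch


def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan() -> dict:
    """Hash every monitored file that exists on this host."""
    return {p: sha256_of(p) for p in MONITORED_FILES if Path(p).is_file()}


def compare(baseline: dict, current: dict) -> list:
    """Report files that were added, modified, or removed since the baseline."""
    findings = []
    for path, digest in current.items():
        if path not in baseline:
            findings.append(f"NEW file monitored: {path}")
        elif baseline[path] != digest:
            findings.append(f"MODIFIED: {path}")
    for path in baseline:
        if path not in current:
            findings.append(f"MISSING: {path}")
    return findings


if __name__ == "__main__":
    current = scan()
    if BASELINE_PATH.exists():
        for finding in compare(json.loads(BASELINE_PATH.read_text()), current):
            print(finding)
    BASELINE_PATH.write_text(json.dumps(current, indent=2))  # refresh the baseline
```

Re-hashing every watched file on every scan is part of why traditional FIM is resource intensive, which is what lighter-weight, event-driven FIM approaches aim to avoid.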
The following is a high-level snapshot of security technologies used at various locations in the cloud:
Guidelines for Evaluating Cloud Security Technologies
After reviewing the types of technologies that can be placed at different locations in the cloud, you will be able to understand the merits of the various point solutions that are available as well as the value of more modern “tools of the trade” such as integrated, cloud-native platforms.
As you complete your assessment of cloud security technologies, take the following into account:
Strategic objectives should drive the selection of appropriate technologies for your organization. Two things you should consider are segmentation and the ability to collect data:

- Your selection process should favor providers that offer an out-of-the-box tool set, and specifically one that enables segmentation.
- Collecting user actions, process actions, network actions, and actions on sensitive files is the next important item to tackle. Choosing a vendor that provides a single agent to gather all of this information, along with analytics for processing the data, eliminates the need to buy several other tools, including network IDS, log, and FIM tools (a minimal collection sketch follows this list).
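As a rough illustration of what a single collection agent gathers, here is a minimal snapshot-style sketch using the third-party psutil library; it is not a production agent, and printing JSON stands in for whatever analytics pipeline a vendor would provide.

```python
import json
import psutil  # third-party: pip install psutil


def collect_snapshot() -> dict:
    """Gather user, process, and network data from the local host in one pass."""
    users = [{"name": u.name, "terminal": u.terminal, "host": u.host}
             for u in psutil.users()]
    processes = [p.info for p in psutil.process_iter(attrs=["pid", "name", "username"])]
    connections = [
        {
            "laddr": f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else None,
            "raddr": f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else None,
            "status": c.status,
            "pid": c.pid,
        }
        for c in psutil.net_connections(kind="inet")
    ]
    return {"users": users, "processes": processes, "connections": connections}


if __name__ == "__main__":
    # In a real agent this snapshot would be streamed to a central analytics
    # backend; here we simply print it.
    print(json.dumps(collect_snapshot(), indent=2))
```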
Taking an “inside out” approach (starting security at the host level) is better for security in the cloud than an outside-in approach because:

- Cloud workloads are started from a known baseline of characteristics. You know exactly what each workload is and how it should behave, so it is relatively easy to catch “unknowns” happening in the workloads (see the baseline sketch after this list).
- The user space in the workload is a much better place to capture workload-level visibility (users, processes, packages) than the kernel space.
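A minimal sketch of the “known baseline” idea, assuming the expected process names for a workload are known from its image or launch template; the baseline set below is hypothetical.

```python
import psutil  # third-party: pip install psutil

# Hypothetical baseline for a single-purpose web workload; in practice this
# would be derived from the image, launch template, or an observed learning period.
EXPECTED_PROCESSES = {"systemd", "sshd", "nginx", "gunicorn", "python3"}


def unknown_processes() -> list:
    """Return processes running on this host that are not in the baseline."""
    findings = []
    for proc in psutil.process_iter(attrs=["pid", "name", "username"]):
        name = proc.info["name"]
        if name and name not in EXPECTED_PROCESSES:
            findings.append(proc.info)
    return findings


if __name__ == "__main__":
    for finding in unknown_processes():
        print(f"Unexpected process: {finding}")
```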
Network-level visibility is essential for cloud workloads; however, deep packet inspection is very expensive. Capturing whom the cloud workloads are talking to and where they accept connections from, and comparing that with a baseline, is a good 80% solution for cloud workloads.
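One way to approximate that 80% solution without packet inspection is to compare a workload’s live remote peers against an allow-list of expected networks. A minimal sketch, with hypothetical CIDR ranges standing in for your real baseline:

```python
import ipaddress
import psutil  # third-party: pip install psutil

# Hypothetical baseline: the networks this workload is expected to talk to.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/16"),    # internal VPC range (example)
    ipaddress.ip_network("198.51.100.0/24"),  # external dependency (example)
]


def is_expected(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED_NETWORKS)


def unexpected_peers() -> set:
    """Remote addresses of established connections that fall outside the baseline."""
    peers = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
            if not is_expected(conn.raddr.ip):
                peers.add(f"{conn.raddr.ip}:{conn.raddr.port}")
    return peers


if __name__ == "__main__":
    for peer in sorted(unexpected_peers()):
        print(f"Connection outside baseline: {peer}")
```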
Infrastructure visibility is essential, and monitoring the API calls made to the infrastructure is a good way to gain visibility into its security state.
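For example, here is a minimal sketch that scans a downloaded CloudTrail log file for a handful of security-relevant API calls; the file path and the set of watched event names are assumptions chosen for illustration.

```python
import gzip
import json

# Hypothetical path to a CloudTrail log file downloaded from S3.
LOG_FILE = "cloudtrail-sample.json.gz"

# A small, illustrative set of API calls worth flagging; a real deployment
# would maintain a much richer policy.
WATCHED_EVENTS = {
    "AuthorizeSecurityGroupIngress",
    "DeleteTrail",
    "StopLogging",
    "CreateAccessKey",
}


def flag_events(path: str) -> list:
    """Return watched API calls found in a CloudTrail log file."""
    with gzip.open(path, "rt") as fh:
        records = json.load(fh).get("Records", [])
    findings = []
    for record in records:
        if record.get("eventName") in WATCHED_EVENTS:
            findings.append({
                "time": record.get("eventTime"),
                "event": record.get("eventName"),
                "source": record.get("eventSource"),
                "actor": record.get("userIdentity", {}).get("arn"),
                "ip": record.get("sourceIPAddress"),
            })
    return findings


if __name__ == "__main__":
    for finding in flag_events(LOG_FILE):
        print(json.dumps(finding))
```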
Developing the skill set needed to analyze threat actors may be overkill for most cloud customers, but automating the comparison of cloud activity against known indicators of compromise gives cloud security operations confidence about a possible breach of cloud assets.
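A minimal sketch of that automation, assuming the IOC feed and the observed activity are already available as simple in-memory structures; both are hypothetical placeholders for a real threat-intelligence feed and real telemetry.

```python
# Hypothetical IOC feed: known-bad IPs, domains, and file hashes.
IOC_FEED = {
    "ips": {"203.0.113.45", "198.51.100.7"},   # documentation-range examples
    "domains": {"malicious.example.net"},
    "sha256": {"0" * 64},                      # placeholder hash
}

# Hypothetical observed activity, e.g. produced by the host agent sketches above.
OBSERVED = [
    {"type": "connection", "remote_ip": "203.0.113.45"},
    {"type": "dns", "domain": "updates.example.com"},
    {"type": "file", "sha256": "f" * 64},
]


def match_iocs(events: list, feed: dict) -> list:
    """Return every observed event that matches a known indicator of compromise."""
    hits = []
    for event in events:
        if event.get("remote_ip") in feed["ips"]:
            hits.append(("ip", event))
        if event.get("domain") in feed["domains"]:
            hits.append(("domain", event))
        if event.get("sha256") in feed["sha256"]:
            hits.append(("hash", event))
    return hits


if __name__ == "__main__":
    for indicator_type, event in match_iocs(OBSERVED, IOC_FEED):
        print(f"IOC match ({indicator_type}): {event}")
```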
Conclusion
Selecting security technologies for the cloud is very different from selecting technologies for traditional enterprise environments because cloud-based workloads are codified and easy to baseline. As with other aspects of the cloud, “a lot can be done with a little.” Selecting a single tool that provides host-level and infrastructure-level visibility would solve several critical use cases, and therefore is a much better strategy than implementing several disparate point solutions.
Ideally, to achieve maximum visibility and process efficiency, you would want to consider an integrated, cloud-native platform that can knit together critical security event information in one place and automatically provide the contextual data required for rapid incident response.