A growing number of deployed containers means an evolving and expanding ecosystem. It also means more challenges and security considerations for organizations. We’ll dissect a few of those challenges and give you solutions you can use.
Many organizations are turning to software containers to facilitate their digital transformations. As defined by Docker, containers are units of software that package an application’s code together with all of its dependencies. Containers ensure consistency: by design, they carry specific versions of the programming languages, software libraries, and other resources their applications need to run, regardless of the underlying host infrastructure or OS version. These software units also virtualize at the OS level, a property that makes them lightweight, faster to start up, and less memory-intensive to run.
Acknowledging these benefits, it’s unsurprising that organizations are deploying more and more containers. The Cloud Native Computing Foundation (CNCF) found in a 2019 survey, for instance, that the percentage of organizations running 250+ containers had grown by 28% over the previous year. That was also the first year in which more than half of surveyed organizations told CNCF they were running that many containers.
But what are the three most common security challenges organizations encounter when working with containers? And what can your business do to mitigate them?
Let’s hash it out.
Container Orchestration and Kubernetes
Larger numbers of containers introduce a challenge: the operational effort that’s needed to run all of those workloads and services. Organizations can’t perform that effort manually. Doing so would be tantamount to needlessly throwing time and money at something that could be automated at a fraction of the cost.
That’s the philosophy behind container orchestration: By driving deployment, provisioning, networking and other elements of a container’s lifecycle, container orchestration simplifies the effort that organizations must expend in managing their containers. It can also help organizations to scale their containerized workloads, notes VMware, thereby boosting the resilience of the apps that those containers are supporting.
CNCF found that most organizations (78%) are specifically going with Kubernetes to meet their container orchestration needs. An open-source, extensible platform, Kubernetes enables organizations to manage their containers using declarative configuration and automation. It does this by:
- Using load balancing to distribute container network traffic, and by
- Empowering organizations to describe a desired state for their deployed containers within Kubernetes.
The platform then takes that declaration and continuously adjusts the actual state of the environment back towards the desired state, restarting containers that fail and killing containers that don’t respond to a user-defined health check.
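As an illustration of this declarative model, the sketch below shows a minimal Deployment manifest that declares a desired state of three replicas with a user-defined health check. The names, labels, and probe path are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # hypothetical workload name
spec:
  replicas: 3                   # desired state: three pod replicas
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.21       # example image
        livenessProbe:          # user-defined health check
          httpGet:
            path: /healthz      # hypothetical health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
```

If a container’s liveness probe fails, Kubernetes restarts it; if a pod dies, the platform replaces it to keep the actual replica count at three.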
The Security Hiccup
There’s just one problem: Kubernetes security incidents are on the rise. In the Fall 2020 edition of its “State of Container and Kubernetes Security” report, StackRox found that 90% of respondents had experienced a security incident involving their container and Kubernetes environments over the course of the previous year. Those experiences and the security concerns they brought up motivated close to half (44%) of survey participants to delay moving their applications into production.
Let’s frame these survey findings another way: many organizations are holding back on their business goals because of their Kubernetes-related security concerns. That’s an unsupportable way to work. Organizations need to have Kubernetes work for them so that they can continue to grow their business. They can’t allow security to set them back.
The only sustainable way forward is for organizations to proactively tackle some of the most common Kubernetes security challenges. This blog post will discuss three common security challenges for Kubernetes and provide best practices that organizations can use to address them. (These challenges are not ranked but listed in alphabetical order.)
Challenge #1: Communication
The entire Kubernetes platform boils down to pods, or groups of one or more containers. Pods are the smallest deployable units that organizations can create and manage within Kubernetes, according to the platform’s documentation. It’s therefore fitting that organizations’ Kubernetes security efforts should begin here.
To begin, there’s the issue of pod communication. Kubernetes’ documentation notes elsewhere that all pods are non-isolated by default and accept traffic from any source. This is a problem in terms of access control. Indeed, malicious actors could abuse that fact to compromise a pod and then use its communication properties to move to all other pods in the same namespace.
Understanding the Utility of Network Policies
Organizations need to shape the ways in which pod communication flows using network policies. Through these application-centric constructs, organizations can specify with which network entities their pods are allowed to communicate. This process involves the use of three identifiers:
- A list of other pods that are allowed to communicate with the selected pod;
- Which virtual clusters or “namespaces” are allowed; and
- Which IP blocks (CIDR ranges) are allowed.
With the help of a selector, organizations can specify what traffic is allowed to and from a pod or a group of pods. The network policy “selects” those resources and rejects any connections that its rules don’t permit. In doing so, the network policy isolates the selected pods from traffic that hasn’t been explicitly allowed.
It’s also important to note that network policies don’t conflict. They are additive. Therefore, if organizations select a pod for more than one network policy, that pod is allowed to communicate according to what’s allowed by the union of those policies.
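To make the three identifiers concrete, here’s a sketch of a NetworkPolicy manifest that allows ingress to selected pods from specific pods, namespaces, and an IP range. All names, labels, and the CIDR are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend          # hypothetical policy name
  namespace: prod               # hypothetical namespace
spec:
  podSelector:                  # "selects" the pods this policy isolates
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:              # identifier 1: other pods allowed to connect
        matchLabels:
          role: frontend
    - namespaceSelector:        # identifier 2: namespaces that are allowed
        matchLabels:
          env: prod
    - ipBlock:                  # identifier 3: IP blocks (CIDR ranges) allowed
        cidr: 10.0.0.0/16
```

Because the three `from` entries are separate list items, traffic matching any one of them is allowed; everything else is rejected for the selected pods.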
Some common examples of network policies include the following:
- Default deny all ingress traffic: This type of network policy acts as a default isolation policy for all pods within a selected namespace. It prevents ingress traffic from reaching them — even if they aren’t selected by another network policy. The default egress behavior within the namespace remains the same.
- Default allow all ingress traffic: This type of network policy allows all ingress traffic within a namespace by default. That’ll remain the case even if the organization decides to create additional network policies that select some of that namespace’s pods and impose conditions that isolate them.
- Default deny all egress traffic: Under this type of network policy, organizations select all pods within a namespace by default and block egress traffic from those pods. Enacting this type of network policy doesn’t change the default ingress behavior, however.
- Default allow all egress traffic: Organizations can create a default allow network policy that allows all egress traffic within the namespace by default. The introduction of subsequent network policies that restrict the egress traffic of some of those pods won’t change that default behavior.
- Default deny all ingress and all egress traffic: Finally, organizations can create a default network policy that disallows all ingress and egress traffic within the namespace. This prevents ingress and egress traffic even on pods that aren’t selected by any other network policy.
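The last of these defaults is the simplest to express. The sketch below shows one way it might look (the policy name is arbitrary):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all        # hypothetical name
spec:
  podSelector: {}               # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress or egress rules are listed, so no traffic
  # in either direction is allowed for any pod.
```

Because network policies are additive, organizations can layer more permissive, narrowly targeted policies on top of this deny-all baseline.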
Challenge #2: Misconfigurations
In the previously cited StackRox survey, 67% of respondents said that they had experienced a misconfiguration incident within the past 12 months. Another 22% of survey participants reported a major vulnerability that they needed to remediate. They were followed by those who detected a runtime incident (17%) and those who failed an audit (16%), respectively.
StackRox explained on its blog that the issue here has to do with network complexity:
“[I]n a sprawling Kubernetes environment with several clusters spanning tens, hundreds, or even thousands of nodes, created by hundreds of different developers, manually checking the configurations is not feasible. And like all humans, developers can make mistakes – especially given that Kubernetes configuration options are complicated [and] security features are not enabled by default….”
Complexity is dangerous because it gives attackers the option of using misconfigured settings to establish a foothold in organizations’ environments and to leverage that access to prey upon their sensitive data. That’s why organizations need to take care to configure their Kubernetes environments in a way that coheres with their security requirements. Using network policies to restrict pod communication is a key part of that, but organizations also need to securely configure the pods themselves using pod security policies (PSPs).
How Pod Security Policies Work
According to Kubernetes’ documentation, a pod security policy is a cluster-level resource that defines the conditions under which a pod is allowed to run within a system. A PSP encapsulates what are known as “security contexts,” definitions of privilege and access control for a pod or container.
These security contexts include Linux capabilities, which grant a pod or container a subset of root privileges without giving it full root access, and allowPrivilegeEscalation, which specifies whether a pod or container can obtain more privileges than its parent process.
Organizations can best wield PSPs by creating a policy that matches what’s contained in their pods. Towards this end, they can create three types of PSPs:
- Privileged: PSPs that are privileged are unrestricted in the sense that they allow the widest range of permissions of any PSP type. Typically aimed at system- and infrastructure-level services managed by trusted users, privileged PSPs allow for privilege escalation. Organizations can create this type of policy by applying no constraints rather than by defining a specific policy.
- Baseline/Default: PSPs that fall under this category carry a few more restrictions that prevent privilege escalation. Typically, application operators and developers of non-critical applications use these types of PSPs.
- Restricted: Last but not least, restricted PSPs are heavily limited. They are geared towards operators and developers of security-critical apps as well as low-trust users.
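As a sketch of what the restricted end of that spectrum might look like, here is a possible PodSecurityPolicy manifest modeled on the restricted profile described in the Kubernetes documentation. The policy name is hypothetical, and the `policy/v1beta1` API shown here was the PSP API available at the time of writing; applying it also requires the PodSecurityPolicy admission controller to be enabled:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted                  # hypothetical policy name
spec:
  privileged: false                 # no privileged containers
  allowPrivilegeEscalation: false   # can't gain more privileges than the parent process
  requiredDropCapabilities:
  - ALL                             # drop all Linux capabilities
  runAsUser:
    rule: MustRunAsNonRoot          # forbid running as the root user
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535                    # disallow root group IDs
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  volumes:                          # permit only non-host volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

A baseline/default policy would relax some of these fields, while a privileged policy would omit the constraints altogether.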
Challenge #3: Runtime Threats
Finally, organizations need to be aware of runtime security threats. Tripwire explains that runtime security happens in real time. Consequently, a compromised container can execute another process that affects other containers or systems, if not the larger enterprise network.
Organizations can tackle those challenges by monitoring security-relevant container activities. In particular, they should pay attention to network traffic in order to limit unnecessary or insecure communication, as specified by their network policies. They can then use what they find to tighten the conditions of their network policies even further.
But their work doesn’t end there. Organizations can partner this monitoring and visibility with process allow lists that help to identify unexpected running container processes. Moreover, they can monitor running deployments for new vulnerabilities and scan container images for existing security weaknesses in order to minimize their attack surface.
Awareness Goes a Long Way
As the above discussion demonstrates, Kubernetes does come with its fair share of security challenges. But organizations can use proper awareness and the platform’s built-in features to face these obstacles and strengthen their security.
For more information about how to secure Kubernetes, check out the platform’s official security documentation.