This is the second installment in our four-part RKE security blog series. Don’t forget to follow along with the upcoming posts in the series:
Part 2 — Rancher Kubernetes Engine (RKE) Security Best Practices for Authentication, Authorization, and Cluster Access
If an organization uses Rancher 2.X to manage its Kubernetes clusters, it can choose from several centralized user authentication options. Rancher 2.X extends RKE with RKE templates and adds authorization capabilities through Kubernetes-native Role-Based Access Control (RBAC). It is highly recommended to manage RKE clusters through Rancher 2.X to simplify user and group access management.
With RKE, every cluster is deployed with a kube-admin user account and a cluster certificate for access. As in most Kubernetes distributions, this account has full access to the cluster. Part of the cluster creation process should be to remove this default account and set access standards for applications and users moving forward.
Kubernetes RBAC provides the standard method for managing authorization for the Kubernetes API endpoints. The practice of creating and managing comprehensive RBAC roles should follow the principle of least privilege, and it provides some of the most critical protections available for your RKE clusters. By applying the principle of least privilege and auditing regularly, you can limit the damage from bad actors, internal misconfigurations, and operational errors.
With RKE, Kubernetes RBAC is the only authorization mode that is supported. This simplifies the process since RBAC is the default and most sensible choice for managing Kubernetes clusters.
```yaml
authorization:
  mode: rbac # use `mode: none` to disable authorization
```
The management of Roles and ClusterRoles can be a complicated topic since organizations and teams are structured differently. However, some general guidelines will make administration easier. When working with RKE, create all necessary RBAC resource objects for the cluster workloads and test them in a non-production environment. Teams can apply these policies using the cluster.yml file during the cluster setup process. RKE has an add-on capability that can be used to create single namespaces or whole applications in your new Kubernetes cluster.
```yaml
addons: |-
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: developer-role
  rules:
    - apiGroups: ["*"]
      resources:
        - deployments
        - configmaps
        - pods
        - services
      verbs:
        - get
        - list
        - watch
```
Or specify an external repo:
```yaml
addons_include:
  - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
  - /path/to/manifest
```
Once your organization has a solid working knowledge of RBAC, create some internal policies and guidelines. Make sure you also regularly audit your Role permissions and RoleBindings. Pay special attention to minimizing ClusterRoles and ClusterRoleBindings, as these apply globally across all namespaces and to resources that are not namespace-scoped. (Run `kubectl api-resources --namespaced=false` in your cluster to see which resources are not namespace-scoped.)
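As a sketch of least privilege in practice, a namespace-scoped Role and RoleBinding can often replace a broad ClusterRole. The `app-team` namespace, role name, and group name below are illustrative assumptions, not part of any RKE default:

```yaml
# Hypothetical example: read-only access to common workload resources,
# scoped to a single namespace rather than the whole cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-readonly
  namespace: app-team
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-readonly-binding
  namespace: app-team
subjects:
  - kind: Group
    name: app-team-developers # assumed group from your auth provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-team-readonly
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a RoleBinding rather than a ClusterRoleBinding, the granted permissions stop at the namespace boundary, which keeps audits simpler.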
By default, a service account is mounted to every pod in an RKE cluster, allowing containers to send requests to the Kubernetes API server. An attacker who gains access to a pod can obtain the corresponding service account token. With RBAC being the default authorization mode for an RKE cluster, service account privileges are determined by role bindings. If these grant elevated privileges, an attacker could send a request to the Kubernetes API server to compromise cluster resources.
Organizations can mitigate this threat vector by configuring Kubernetes RBAC and adopting the least privilege model for service accounts and their role bindings. However, recognize that there is always friction when implementing read-only defaults for accounts. Try to set up an organizational process where various teams can request permission publicly. This may help to streamline how administrative permissions are configured across groups.
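Beyond tightening role bindings, pods that never need to call the API server do not need a token at all. One commonly used Kubernetes mitigation (not specific to RKE) is to disable automatic token mounting; the names and namespace below are illustrative:

```yaml
# Opt out of token mounting for an entire service account...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access # hypothetical name
  namespace: app-team
automountServiceAccountToken: false
---
# ...or for an individual pod.
apiVersion: v1
kind: Pod
metadata:
  name: static-web # hypothetical name
  namespace: app-team
spec:
  automountServiceAccountToken: false
  containers:
    - name: web
      image: nginx:1.25
```

With the token absent, a compromised container has no service account credential to steal, regardless of what its role bindings would have allowed.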
In 2019, Kubernetes was affected by the “billion laughs” vulnerability, CVE-2019-11253. This CVE allowed an attacker to send maliciously crafted YAML payloads that consumed excessive CPU and memory on the Kubernetes API server, blocking authenticated clients from making requests. The EventRateLimit admission controller helps address this class of denial of service: it enforces a limit on the number of event requests that the API server will accept in a given time window. With small teams on a private network, this may not be a serious consideration; however, scaling and automation can also generate excess requests to the API server. It is therefore recommended to limit the rate of events that the API server will accept. RKE allows rate limits to be configured per server, namespace, user, or a combination of a source and an object.
```yaml
services:
  kube-api:
    event_rate_limit:
      enabled: true
      configuration:
        apiVersion: eventratelimit.admission.k8s.io/v1alpha1
        kind: Configuration
        limits:
          - type: Server
            qps: 6000
            burst: 30000
```

Limits can also be scoped more narrowly, for example per namespace:

```yaml
limits:
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
```
Here, `qps` is the number of event queries per second allowed for this type of limit, `burst` is the number of event queries allowed in a burst, and `cacheSize` is the size of the least-recently-used (LRU) cache for this type of limit.
Kubernetes and Rancher clusters rely on several secure certificate chains and credentials for security. If sensitive keys or certificates are compromised, the entire cluster’s integrity and workloads are at risk. Additionally, many security policies and compliance certifications require regular rotation of encryption keys and credentials.
RKE uses REST-based HTTPS communication, encrypted with TLS certificates. These certificates are generated during installation for the components that require HTTPS traffic, such as etcd, the Kubernetes API server, the controller manager, the scheduler, the kubelet, and kube-proxy.
RKE allows operators to rotate the certificates of any RKE component. Using the original cluster.yml file, an operator can rotate certificates by running `rke cert rotate`. If organizations opt to manage their clusters through Rancher 2.X, admins can rotate cluster certificates directly through the UI.
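As a sketch, the rotation commands look like the following; the `--service` value shown is an assumption based on common RKE component names, so check `rke cert rotate --help` for the exact set supported by your RKE version:

```shell
# Rotate the certificates for all components (run from the directory
# containing the original cluster.yml).
rke cert rotate

# Rotate the certificate of a single component, e.g. etcd.
rke cert rotate --service etcd

# Rotate the CA certificate; this also regenerates the service
# certificates signed by the new CA.
rke cert rotate --rotate-ca
```

Scheduling these rotations regularly, rather than only after an incident, keeps the cluster aligned with the compliance requirements mentioned above.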